Test Report: Docker_Linux_crio 12230

4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0:2021-08-10:19925
Test failures (13/237)

TestAddons/parallel/Ingress (305.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:340: "ingress-nginx-admission-create-2hdx9" [7510e398-a262-4344-a133-6707c967bf76] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 3.465001ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210810222001-345780 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210810222001-345780 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:340: "nginx" [2bd7593c-823b-4e3e-aa49-8c8717b4cdaa] Pending
helpers_test.go:340: "nginx" [2bd7593c-823b-4e3e-aa49-8c8717b4cdaa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:340: "nginx" [2bd7593c-823b-4e3e-aa49-8c8717b4cdaa] Running
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.006696362s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210810222001-345780 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.242017861s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:224: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210810222001-345780 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210810222001-345780 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.784664569s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:262: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable ingress --alsologtostderr -v=1: (28.690359118s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210810222001-345780
helpers_test.go:236: (dbg) docker inspect addons-20210810222001-345780:
-- stdout --
	[
	    {
	        "Id": "88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd",
	        "Created": "2021-08-10T22:20:11.462871421Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:20:11.949250423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/hosts",
	        "LogPath": "/var/lib/docker/containers/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd-json.log",
	        "Name": "/addons-20210810222001-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210810222001-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210810222001-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4265978ecf45e22c27fd2be473889a3d2bd3d2933e1be9a453673a14fd7af41b-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4265978ecf45e22c27fd2be473889a3d2bd3d2933e1be9a453673a14fd7af41b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4265978ecf45e22c27fd2be473889a3d2bd3d2933e1be9a453673a14fd7af41b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4265978ecf45e22c27fd2be473889a3d2bd3d2933e1be9a453673a14fd7af41b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210810222001-345780",
	                "Source": "/var/lib/docker/volumes/addons-20210810222001-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210810222001-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210810222001-345780",
	                "name.minikube.sigs.k8s.io": "addons-20210810222001-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec3cc1e55aff905f89e24fa225622051ed60170aa7acefab39272bd5e29fa606",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33011"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33008"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33010"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33009"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ec3cc1e55aff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210810222001-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "88caeb25c14e"
	                    ],
	                    "NetworkID": "b8a32c725a61a793069177d54c1dee78dbdca4075a5b827c2d0eb3c37bb414f3",
	                    "EndpointID": "f7d6003ad00679aebef2b20564f30d920426277813a02c71306ced28004d3f7c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210810222001-345780 -n addons-20210810222001-345780
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 logs -n 25
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |                Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                 | download-only-20210810221930-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:19:48 UTC | Tue, 10 Aug 2021 22:19:48 UTC |
	| delete  | -p                                    | download-only-20210810221930-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:19:48 UTC | Tue, 10 Aug 2021 22:19:48 UTC |
	|         | download-only-20210810221930-345780   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-only-20210810221930-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:19:48 UTC | Tue, 10 Aug 2021 22:19:48 UTC |
	|         | download-only-20210810221930-345780   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-docker-20210810221948-345780 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:20:01 UTC | Tue, 10 Aug 2021 22:20:01 UTC |
	|         | download-docker-20210810221948-345780 |                                       |         |         |                               |                               |
	| start   | -p                                    | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:20:01 UTC | Tue, 10 Aug 2021 22:22:36 UTC |
	|         | addons-20210810222001-345780          |                                       |         |         |                               |                               |
	|         | --wait=true --memory=4000             |                                       |         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |         |         |                               |                               |
	|         | --addons=registry                     |                                       |         |         |                               |                               |
	|         | --addons=metrics-server               |                                       |         |         |                               |                               |
	|         | --addons=olm                          |                                       |         |         |                               |                               |
	|         | --addons=volumesnapshots              |                                       |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver          |                                       |         |         |                               |                               |
	|         | --driver=docker                       |                                       |         |         |                               |                               |
	|         | --container-runtime=crio              |                                       |         |         |                               |                               |
	|         | --addons=ingress                      |                                       |         |         |                               |                               |
	|         | --addons=helm-tiller                  |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:22:49 UTC | Tue, 10 Aug 2021 22:22:59 UTC |
	|         | addons enable gcp-auth --force        |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:23:12 UTC | Tue, 10 Aug 2021 22:23:12 UTC |
	|         | addons disable helm-tiller            |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:23:17 UTC | Tue, 10 Aug 2021 22:23:18 UTC |
	|         | addons disable metrics-server         |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:23:20 UTC | Tue, 10 Aug 2021 22:23:20 UTC |
	|         | ip                                    |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:23:20 UTC | Tue, 10 Aug 2021 22:23:20 UTC |
	|         | addons disable registry               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:23:36 UTC | Tue, 10 Aug 2021 22:23:43 UTC |
	|         | addons disable gcp-auth               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:23:36 UTC | Tue, 10 Aug 2021 22:23:43 UTC |
	|         | addons disable                        |                                       |         |         |                               |                               |
	|         | csi-hostpath-driver                   |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:23:43 UTC | Tue, 10 Aug 2021 22:23:44 UTC |
	|         | addons disable volumesnapshots        |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210810222001-345780          | addons-20210810222001-345780          | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:27:55 UTC | Tue, 10 Aug 2021 22:28:24 UTC |
	|         | addons disable ingress                |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:20:01
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:20:01.706890  346703 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:20:01.706971  346703 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:20:01.706975  346703 out.go:311] Setting ErrFile to fd 2...
	I0810 22:20:01.706978  346703 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:20:01.707077  346703 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:20:01.707376  346703 out.go:305] Setting JSON to false
	I0810 22:20:01.743444  346703 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":7363,"bootTime":1628626639,"procs":182,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:20:01.743574  346703 start.go:121] virtualization: kvm guest
	I0810 22:20:01.746307  346703 out.go:177] * [addons-20210810222001-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:20:01.748067  346703 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:20:01.746505  346703 notify.go:169] Checking for updates...
	I0810 22:20:01.749832  346703 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:20:01.751572  346703 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:20:01.752987  346703 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:20:01.753187  346703 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:20:01.798652  346703 docker.go:132] docker version: linux-19.03.15
	I0810 22:20:01.798772  346703 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:20:01.878562  346703 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:20:01.832742064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:20:01.878664  346703 docker.go:244] overlay module found
	I0810 22:20:01.880887  346703 out.go:177] * Using the docker driver based on user configuration
	I0810 22:20:01.880942  346703 start.go:278] selected driver: docker
	I0810 22:20:01.880971  346703 start.go:751] validating driver "docker" against <nil>
	I0810 22:20:01.880998  346703 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:20:01.881045  346703 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:20:01.881069  346703 out.go:242] ! Your cgroup does not allow setting memory.
	I0810 22:20:01.882685  346703 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:20:01.883576  346703 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:20:01.962501  346703 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:20:01.916769303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:20:01.962646  346703 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:20:01.962797  346703 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:20:01.962821  346703 cni.go:93] Creating CNI manager for ""
	I0810 22:20:01.962827  346703 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:20:01.962833  346703 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0810 22:20:01.962842  346703 start_flags.go:277] config:
	{Name:addons-20210810222001-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210810222001-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:20:01.965470  346703 out.go:177] * Starting control plane node addons-20210810222001-345780 in cluster addons-20210810222001-345780
	I0810 22:20:01.965525  346703 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:20:01.967115  346703 out.go:177] * Pulling base image ...
	I0810 22:20:01.967146  346703 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:20:01.967179  346703 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:20:01.967199  346703 cache.go:56] Caching tarball of preloaded images
	I0810 22:20:01.967261  346703 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:20:01.967350  346703 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:20:01.967374  346703 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:20:01.967692  346703 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/config.json ...
	I0810 22:20:01.967722  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/config.json: {Name:mkd09b06bfe60a94adf5d044ac878bc2b47be613 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:02.055167  346703 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:20:02.055203  346703 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:20:02.055221  346703 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:20:02.055270  346703 start.go:313] acquiring machines lock for addons-20210810222001-345780: {Name:mk9a402cd9db2315e1cf7bf5e17603512e2b9651 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:20:02.055408  346703 start.go:317] acquired machines lock for "addons-20210810222001-345780" in 118.62µs
	I0810 22:20:02.055437  346703 start.go:89] Provisioning new machine with config: &{Name:addons-20210810222001-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210810222001-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:20:02.055515  346703 start.go:126] createHost starting for "" (driver="docker")
	I0810 22:20:02.058100  346703 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0810 22:20:02.058399  346703 start.go:160] libmachine.API.Create for "addons-20210810222001-345780" (driver="docker")
	I0810 22:20:02.058432  346703 client.go:168] LocalClient.Create starting
	I0810 22:20:02.058556  346703 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:20:02.336080  346703 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:20:02.448708  346703 cli_runner.go:115] Run: docker network inspect addons-20210810222001-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0810 22:20:02.484566  346703 cli_runner.go:162] docker network inspect addons-20210810222001-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0810 22:20:02.484653  346703 network_create.go:255] running [docker network inspect addons-20210810222001-345780] to gather additional debugging logs...
	I0810 22:20:02.484678  346703 cli_runner.go:115] Run: docker network inspect addons-20210810222001-345780
	W0810 22:20:02.519494  346703 cli_runner.go:162] docker network inspect addons-20210810222001-345780 returned with exit code 1
	I0810 22:20:02.519530  346703 network_create.go:258] error running [docker network inspect addons-20210810222001-345780]: docker network inspect addons-20210810222001-345780: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210810222001-345780
	I0810 22:20:02.519552  346703 network_create.go:260] output of [docker network inspect addons-20210810222001-345780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210810222001-345780
	
	** /stderr **
	I0810 22:20:02.519634  346703 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:20:02.555235  346703 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00099a050] misses:0}
	I0810 22:20:02.555287  346703 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0810 22:20:02.555313  346703 network_create.go:106] attempt to create docker network addons-20210810222001-345780 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0810 22:20:02.555371  346703 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210810222001-345780
	I0810 22:20:02.627313  346703 network_create.go:90] docker network addons-20210810222001-345780 192.168.49.0/24 created
	I0810 22:20:02.627354  346703 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210810222001-345780" container
	I0810 22:20:02.627421  346703 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0810 22:20:02.663991  346703 cli_runner.go:115] Run: docker volume create addons-20210810222001-345780 --label name.minikube.sigs.k8s.io=addons-20210810222001-345780 --label created_by.minikube.sigs.k8s.io=true
	I0810 22:20:02.702547  346703 oci.go:102] Successfully created a docker volume addons-20210810222001-345780
	I0810 22:20:02.702658  346703 cli_runner.go:115] Run: docker run --rm --name addons-20210810222001-345780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210810222001-345780 --entrypoint /usr/bin/test -v addons-20210810222001-345780:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0810 22:20:11.344000  346703 cli_runner.go:168] Completed: docker run --rm --name addons-20210810222001-345780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210810222001-345780 --entrypoint /usr/bin/test -v addons-20210810222001-345780:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (8.64127288s)
	I0810 22:20:11.344038  346703 oci.go:106] Successfully prepared a docker volume addons-20210810222001-345780
	W0810 22:20:11.344072  346703 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0810 22:20:11.344080  346703 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0810 22:20:11.344142  346703 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0810 22:20:11.344160  346703 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:20:11.344191  346703 kic.go:179] Starting extracting preloaded images to volume ...
	I0810 22:20:11.344263  346703 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210810222001-345780:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0810 22:20:11.423009  346703 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210810222001-345780 --name addons-20210810222001-345780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210810222001-345780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210810222001-345780 --network addons-20210810222001-345780 --ip 192.168.49.2 --volume addons-20210810222001-345780:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0810 22:20:11.957774  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Running}}
	I0810 22:20:11.999883  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:12.045969  346703 cli_runner.go:115] Run: docker exec addons-20210810222001-345780 stat /var/lib/dpkg/alternatives/iptables
	I0810 22:20:12.183216  346703 oci.go:278] the created container "addons-20210810222001-345780" has a running status.
	I0810 22:20:12.183334  346703 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa...
	I0810 22:20:12.284395  346703 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0810 22:20:12.687302  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:12.735017  346703 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0810 22:20:12.735047  346703 kic_runner.go:115] Args: [docker exec --privileged addons-20210810222001-345780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0810 22:20:14.982906  346703 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210810222001-345780:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (3.638570834s)
	I0810 22:20:14.982950  346703 kic.go:188] duration metric: took 3.638754 seconds to extract preloaded images to volume
	I0810 22:20:14.983040  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:15.021121  346703 machine.go:88] provisioning docker machine ...
	I0810 22:20:15.021173  346703 ubuntu.go:169] provisioning hostname "addons-20210810222001-345780"
	I0810 22:20:15.021230  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:15.058047  346703 main.go:130] libmachine: Using SSH client type: native
	I0810 22:20:15.058260  346703 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33012 <nil> <nil>}
	I0810 22:20:15.058278  346703 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210810222001-345780 && echo "addons-20210810222001-345780" | sudo tee /etc/hostname
	I0810 22:20:15.245245  346703 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210810222001-345780
	
	I0810 22:20:15.245355  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:15.282433  346703 main.go:130] libmachine: Using SSH client type: native
	I0810 22:20:15.282599  346703 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33012 <nil> <nil>}
	I0810 22:20:15.282621  346703 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210810222001-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210810222001-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210810222001-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:20:15.392717  346703 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:20:15.392756  346703 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:20:15.392787  346703 ubuntu.go:177] setting up certificates
	I0810 22:20:15.392808  346703 provision.go:83] configureAuth start
	I0810 22:20:15.392863  346703 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210810222001-345780
	I0810 22:20:15.429259  346703 provision.go:137] copyHostCerts
	I0810 22:20:15.429330  346703 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:20:15.429423  346703 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:20:15.429476  346703 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:20:15.429518  346703 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.addons-20210810222001-345780 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210810222001-345780]
	I0810 22:20:15.503490  346703 provision.go:171] copyRemoteCerts
	I0810 22:20:15.503551  346703 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:20:15.503602  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:15.540957  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:15.624230  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:20:15.640750  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0810 22:20:15.656216  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 22:20:15.672162  346703 provision.go:86] duration metric: configureAuth took 279.339502ms
	I0810 22:20:15.672190  346703 ubuntu.go:193] setting minikube options for container-runtime
	I0810 22:20:15.672449  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:15.709833  346703 main.go:130] libmachine: Using SSH client type: native
	I0810 22:20:15.709990  346703 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33012 <nil> <nil>}
	I0810 22:20:15.710008  346703 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:20:16.053841  346703 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:20:16.053871  346703 machine.go:91] provisioned docker machine in 1.032724693s
	I0810 22:20:16.053885  346703 client.go:171] LocalClient.Create took 13.995446386s
	I0810 22:20:16.053904  346703 start.go:168] duration metric: libmachine.API.Create for "addons-20210810222001-345780" took 13.99550407s
	I0810 22:20:16.053921  346703 start.go:267] post-start starting for "addons-20210810222001-345780" (driver="docker")
	I0810 22:20:16.053928  346703 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:20:16.054008  346703 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:20:16.054065  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:16.091149  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:16.176591  346703 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:20:16.179361  346703 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 22:20:16.179386  346703 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 22:20:16.179396  346703 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 22:20:16.179403  346703 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0810 22:20:16.179417  346703 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:20:16.179499  346703 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:20:16.179542  346703 start.go:270] post-start completed in 125.60661ms
	I0810 22:20:16.179893  346703 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210810222001-345780
	I0810 22:20:16.215749  346703 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/config.json ...
	I0810 22:20:16.216009  346703 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:20:16.216064  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:16.251633  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:16.333988  346703 start.go:129] duration metric: createHost completed in 14.278458659s
	I0810 22:20:16.334016  346703 start.go:80] releasing machines lock for "addons-20210810222001-345780", held for 14.278596279s
	I0810 22:20:16.334101  346703 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210810222001-345780
	I0810 22:20:16.371401  346703 ssh_runner.go:149] Run: systemctl --version
	I0810 22:20:16.371459  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:16.371459  346703 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:20:16.371528  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:16.411021  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:16.411214  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:16.531877  346703 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:20:16.550225  346703 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:20:16.558692  346703 docker.go:153] disabling docker service ...
	I0810 22:20:16.558742  346703 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:20:16.570050  346703 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:20:16.578942  346703 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:20:16.643478  346703 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:20:16.708485  346703 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:20:16.717509  346703 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:20:16.730921  346703 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:20:16.738827  346703 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:20:16.738867  346703 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:20:16.747167  346703 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:20:16.753773  346703 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:20:16.753829  346703 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:20:16.760608  346703 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
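The three steps above (probe `net.bridge.bridge-nf-call-iptables`, fall back to `modprobe br_netfilter` when the key is missing, as the status-255 error shows) form a check-then-fallback pattern. A minimal sketch of that logic, with the runner injected so it can be exercised without root; the function name and return strings are illustrative, not minikube's actual API:

```python
import subprocess

def ensure_bridge_netfilter(run=subprocess.run):
    """Mirror the log's sequence: if the sysctl key is absent (the
    status-255 failure above), load br_netfilter and probe again.
    `run` is injectable so the decision logic is testable without root."""
    probe = ["sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"]
    if run(probe, capture_output=True).returncode == 0:
        return "already-set"
    # sysctl failed: the module is likely not loaded, so load it
    if run(["sudo", "modprobe", "br_netfilter"], capture_output=True).returncode != 0:
        return "modprobe-failed"
    return "loaded" if run(probe, capture_output=True).returncode == 0 else "still-missing"
```

The real code treats the first failure as non-fatal ("which might be okay"), matching the log message at 22:20:16.753773.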
	I0810 22:20:16.766582  346703 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:20:16.823576  346703 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:20:16.832664  346703 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:20:16.832733  346703 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:20:16.835778  346703 start.go:417] Will wait 60s for crictl version
	I0810 22:20:16.835825  346703 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:20:16.863368  346703 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0810 22:20:16.863440  346703 ssh_runner.go:149] Run: crio --version
	I0810 22:20:16.922684  346703 ssh_runner.go:149] Run: crio --version
	I0810 22:20:16.984625  346703 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0810 22:20:16.984723  346703 cli_runner.go:115] Run: docker network inspect addons-20210810222001-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:20:17.022627  346703 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0810 22:20:17.026190  346703 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:20:17.035528  346703 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:20:17.035586  346703 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:20:17.079006  346703 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:20:17.079030  346703 crio.go:333] Images already preloaded, skipping extraction
	I0810 22:20:17.079079  346703 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:20:17.102278  346703 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:20:17.102302  346703 cache_images.go:74] Images are preloaded, skipping loading
	I0810 22:20:17.102362  346703 ssh_runner.go:149] Run: crio config
	I0810 22:20:17.168226  346703 cni.go:93] Creating CNI manager for ""
	I0810 22:20:17.168249  346703 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:20:17.168265  346703 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:20:17.168327  346703 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210810222001-345780 NodeName:addons-20210810222001-345780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:20:17.168490  346703 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210810222001-345780"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
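The generated kubeadm config above is three YAML documents whose network fields must agree: the pod CIDR appears both as `podSubnet` in ClusterConfiguration and as `clusterCIDR` in KubeProxyConfiguration, and the kubelet's `cgroupDriver` must match the runtime's (systemd here, per the CgroupDriver option). A stdlib-only sketch of such a cross-document consistency check, assuming the config is available as a single string; this is an illustrative validator, not part of minikube:

```python
import re

def check_kubeadm_config(text):
    """Cross-check fields duplicated across the multi-document
    kubeadm.yaml shown in the log. String matching only."""
    docs = [d for d in text.split("\n---\n") if d.strip()]

    def field(kind, key):
        doc = next((d for d in docs if f"kind: {kind}" in d), "")
        m = re.search(rf'^\s*{key}:\s*"?([^"\n]+?)"?\s*$', doc, re.M)
        return m.group(1) if m else None

    errors = []
    if field("ClusterConfiguration", "podSubnet") != field("KubeProxyConfiguration", "clusterCIDR"):
        errors.append("pod CIDR mismatch")
    if field("KubeletConfiguration", "cgroupDriver") != "systemd":
        errors.append("kubelet cgroup driver is not systemd")
    return errors
```

Against the config printed above, both checks pass: `podSubnet` and `clusterCIDR` are `10.244.0.0/16`, and `cgroupDriver` is `systemd`.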
	I0810 22:20:17.168648  346703 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-20210810222001-345780 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210810222001-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:20:17.168717  346703 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:20:17.176060  346703 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:20:17.176138  346703 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:20:17.183255  346703 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0810 22:20:17.196085  346703 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:20:17.208793  346703 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0810 22:20:17.222017  346703 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0810 22:20:17.225065  346703 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
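The `grep -v ... ; echo ...` pipeline above is an idempotent upsert of one `/etc/hosts` entry: drop any existing line for the name, then append the fresh `IP<TAB>name` pair. A pure-function sketch of the same transformation, assuming tab-separated entries as in the log; the helper name is hypothetical:

```python
def upsert_host(hosts_text, ip, name):
    """Rebuild an /etc/hosts body with exactly one entry for `name`,
    mirroring the log's `grep -v` + `echo` pipeline."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith(f"\t{name}")]   # drop stale entries
    kept.append(f"{ip}\t{name}")                 # append the fresh one
    return "\n".join(kept) + "\n"
```

Running it twice with the same arguments yields the same file, which is why minikube can safely repeat this for both `host.minikube.internal` and `control-plane.minikube.internal` on every start.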
	I0810 22:20:17.234498  346703 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780 for IP: 192.168.49.2
	I0810 22:20:17.234553  346703 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:20:17.374972  346703 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt ...
	I0810 22:20:17.375013  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt: {Name:mk390500506586dfa91675e27b72d021999d28fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.375261  346703 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key ...
	I0810 22:20:17.375283  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key: {Name:mkcd486b2846c64016abedd0abf4ec836061335b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.375400  346703 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:20:17.463051  346703 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt ...
	I0810 22:20:17.463087  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt: {Name:mk4135bad7ad5e2b1b0d858f9bf69b91b922846a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.463310  346703 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key ...
	I0810 22:20:17.463329  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key: {Name:mkc6e1ec3aff0b6f6c35d30d05a51bed2ef89760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.463499  346703 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.key
	I0810 22:20:17.463514  346703 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt with IP's: []
	I0810 22:20:17.615084  346703 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt ...
	I0810 22:20:17.615120  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: {Name:mke46f70b8adf32b51bc7ad792757521992ff39d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.615343  346703 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.key ...
	I0810 22:20:17.615361  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.key: {Name:mk5383b8e2538b8e964df1f13b7c47a5dcd486b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.615479  346703 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.key.dd3b5fb2
	I0810 22:20:17.615493  346703 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0810 22:20:17.777450  346703 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.crt.dd3b5fb2 ...
	I0810 22:20:17.777491  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.crt.dd3b5fb2: {Name:mk2c9ec8a140a2e2fb4b84a5c0565c49a130f252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.777725  346703 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.key.dd3b5fb2 ...
	I0810 22:20:17.777745  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.key.dd3b5fb2: {Name:mkedf9f8e4b5eb4670735dced050d2d5c0afb530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.777856  346703 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.crt
	I0810 22:20:17.777934  346703 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.key
	I0810 22:20:17.777992  346703 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/proxy-client.key
	I0810 22:20:17.778001  346703 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/proxy-client.crt with IP's: []
	I0810 22:20:17.846422  346703 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/proxy-client.crt ...
	I0810 22:20:17.846459  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/proxy-client.crt: {Name:mk092173ee95be272fea854fc5de9d0f6e0cf046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.846650  346703 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/proxy-client.key ...
	I0810 22:20:17.846665  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/proxy-client.key: {Name:mk855039738623ab29f016095806f4f3927156b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:17.846828  346703 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0810 22:20:17.846864  346703 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:20:17.846894  346703 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:20:17.846917  346703 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:20:17.848078  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:20:17.867109  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0810 22:20:17.883532  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:20:17.899087  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0810 22:20:17.914545  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:20:17.930464  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:20:17.948372  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:20:17.965327  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0810 22:20:17.981875  346703 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:20:17.998382  346703 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:20:18.009656  346703 ssh_runner.go:149] Run: openssl version
	I0810 22:20:18.014286  346703 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:20:18.021124  346703 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:20:18.024197  346703 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:20:18.024254  346703 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:20:18.028817  346703 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:20:18.035590  346703 kubeadm.go:390] StartCluster: {Name:addons-20210810222001-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210810222001-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:20:18.035697  346703 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:20:18.035745  346703 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:20:18.058118  346703 cri.go:76] found id: ""
	I0810 22:20:18.058164  346703 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:20:18.064525  346703 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 22:20:18.070880  346703 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0810 22:20:18.070921  346703 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:20:18.077209  346703 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 22:20:18.077247  346703 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0810 22:20:18.352000  346703 out.go:204]   - Generating certificates and keys ...
	I0810 22:20:20.702746  346703 out.go:204]   - Booting up control plane ...
	I0810 22:20:34.751653  346703 out.go:204]   - Configuring RBAC rules ...
	I0810 22:20:35.181316  346703 cni.go:93] Creating CNI manager for ""
	I0810 22:20:35.181344  346703 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:20:35.182935  346703 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0810 22:20:35.183012  346703 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:20:35.186618  346703 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 22:20:35.186636  346703 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:20:35.199631  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:20:35.567501  346703 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0810 22:20:35.567582  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=addons-20210810222001-345780 minikube.k8s.io/updated_at=2021_08_10T22_20_35_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:35.567587  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:35.672693  346703 ops.go:34] apiserver oom_adj: -16
	I0810 22:20:35.672801  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:36.239901  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:36.739913  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:37.239416  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:37.739566  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:38.239487  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:38.740096  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:39.239293  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:39.740292  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:40.239295  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:40.739922  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:41.239528  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:41.739991  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:42.739862  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:43.239756  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:43.739606  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:44.239598  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:44.740049  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:45.239505  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:45.740002  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:46.240137  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:46.740057  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:49.428021  346703 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.295988674s)
	I0810 22:20:49.739387  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:51.892060  346703 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.15262567s)
	I0810 22:20:52.239350  346703 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:20:52.311658  346703 kubeadm.go:985] duration metric: took 16.744145298s to wait for elevateKubeSystemPrivileges.
	I0810 22:20:52.311693  346703 kubeadm.go:392] StartCluster complete in 34.276115098s
	I0810 22:20:52.311715  346703 settings.go:142] acquiring lock: {Name:mka213f92e424859b3fea9ed3e06c1529c3d79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:52.311852  346703 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:20:52.312378  346703 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mk4b0a8134f819d1f0c4fc03757f6964ae0e24de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:20:52.830190  346703 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210810222001-345780" rescaled to 1
	I0810 22:20:52.830269  346703 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:20:52.833143  346703 out.go:177] * Verifying Kubernetes components...
	I0810 22:20:52.830338  346703 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:20:52.833261  346703 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:20:52.830658  346703 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0810 22:20:52.833357  346703 addons.go:59] Setting volumesnapshots=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833367  346703 addons.go:59] Setting registry=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833376  346703 addons.go:135] Setting addon volumesnapshots=true in "addons-20210810222001-345780"
	I0810 22:20:52.833383  346703 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833367  346703 addons.go:59] Setting helm-tiller=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833395  346703 addons.go:59] Setting default-storageclass=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833403  346703 addons.go:59] Setting ingress=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833411  346703 addons.go:59] Setting storage-provisioner=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833412  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.833424  346703 addons.go:135] Setting addon ingress=true in "addons-20210810222001-345780"
	I0810 22:20:52.833431  346703 addons.go:135] Setting addon storage-provisioner=true in "addons-20210810222001-345780"
	W0810 22:20:52.833440  346703 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:20:52.833452  346703 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210810222001-345780"
	I0810 22:20:52.833451  346703 addons.go:59] Setting metrics-server=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833459  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.833467  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.833472  346703 addons.go:135] Setting addon metrics-server=true in "addons-20210810222001-345780"
	I0810 22:20:52.833380  346703 addons.go:59] Setting olm=true in profile "addons-20210810222001-345780"
	I0810 22:20:52.833401  346703 addons.go:135] Setting addon helm-tiller=true in "addons-20210810222001-345780"
	I0810 22:20:52.833508  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.833509  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.833509  346703 addons.go:135] Setting addon olm=true in "addons-20210810222001-345780"
	I0810 22:20:52.833525  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.833551  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.834041  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.834041  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.834048  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.834052  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.833414  346703 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210810222001-345780"
	I0810 22:20:52.833387  346703 addons.go:135] Setting addon registry=true in "addons-20210810222001-345780"
	I0810 22:20:52.834042  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.834183  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.834042  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.834400  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.834522  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.834620  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.868214  346703 node_ready.go:35] waiting up to 6m0s for node "addons-20210810222001-345780" to be "Ready" ...
	I0810 22:20:52.874770  346703 node_ready.go:49] node "addons-20210810222001-345780" has status "Ready":"True"
	I0810 22:20:52.874799  346703 node_ready.go:38] duration metric: took 6.54229ms waiting for node "addons-20210810222001-345780" to be "Ready" ...
	I0810 22:20:52.874812  346703 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:20:52.921562  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0810 22:20:52.921638  346703 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0810 22:20:52.921651  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0810 22:20:52.921712  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:52.922158  346703 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace to be "Ready" ...
	I0810 22:20:52.929756  346703 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0810 22:20:52.929820  346703 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0810 22:20:52.929830  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0810 22:20:52.929886  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:52.944785  346703 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0810 22:20:52.948169  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0810 22:20:52.944914  346703 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0810 22:20:52.949899  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0810 22:20:52.949902  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0810 22:20:52.951689  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0810 22:20:52.950081  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:52.953578  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0810 22:20:52.955187  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0810 22:20:52.956749  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0810 22:20:52.958446  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0810 22:20:52.960242  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0810 22:20:52.961901  346703 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0810 22:20:52.961978  346703 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0810 22:20:52.961994  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0810 22:20:52.962068  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:52.971122  346703 out.go:177]   - Using image registry:2.7.1
	I0810 22:20:52.972958  346703 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0810 22:20:52.973084  346703 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0810 22:20:52.973097  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0810 22:20:52.973163  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:52.976844  346703 addons.go:135] Setting addon default-storageclass=true in "addons-20210810222001-345780"
	W0810 22:20:52.976873  346703 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:20:52.976908  346703 host.go:66] Checking if "addons-20210810222001-345780" exists ...
	I0810 22:20:52.977478  346703 cli_runner.go:115] Run: docker container inspect addons-20210810222001-345780 --format={{.State.Status}}
	I0810 22:20:52.983225  346703 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0810 22:20:52.985499  346703 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:20:52.985652  346703 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:20:52.985667  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:20:52.985734  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:52.985471  346703 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0810 22:20:53.012264  346703 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0810 22:20:53.011630  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.014093  346703 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0810 22:20:53.015999  346703 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0810 22:20:53.015765  346703 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0810 22:20:53.016057  346703 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0810 22:20:53.016068  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0810 22:20:53.016126  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:53.021010  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.026683  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.036346  346703 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0810 22:20:53.036390  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0810 22:20:53.036458  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:53.061377  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.084452  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.100977  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.102944  346703 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:20:53.102976  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:20:53.103035  346703 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210810222001-345780
	I0810 22:20:53.110671  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.111597  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.144386  346703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810222001-345780/id_rsa Username:docker}
	I0810 22:20:53.257893  346703 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0810 22:20:53.257921  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0810 22:20:53.272335  346703 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0810 22:20:53.272371  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0810 22:20:53.281447  346703 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0810 22:20:53.281475  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0810 22:20:53.359073  346703 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0810 22:20:53.359098  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0810 22:20:53.364347  346703 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0810 22:20:53.364372  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0810 22:20:53.367657  346703 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0810 22:20:53.367680  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0810 22:20:53.378935  346703 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0810 22:20:53.378963  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0810 22:20:53.380341  346703 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0810 22:20:53.380364  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0810 22:20:53.458362  346703 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0810 22:20:53.458403  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0810 22:20:53.459508  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:20:53.460850  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:20:53.470223  346703 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0810 22:20:53.470320  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0810 22:20:53.474643  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0810 22:20:53.481889  346703 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0810 22:20:53.481926  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0810 22:20:53.563377  346703 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0810 22:20:53.563409  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0810 22:20:53.565762  346703 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0810 22:20:53.565790  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0810 22:20:53.567151  346703 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0810 22:20:53.567185  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0810 22:20:53.569152  346703 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0810 22:20:53.569173  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0810 22:20:53.572878  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0810 22:20:53.660478  346703 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0810 22:20:53.660517  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0810 22:20:53.672465  346703 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0810 22:20:53.672500  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0810 22:20:53.680516  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0810 22:20:53.758585  346703 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0810 22:20:53.758665  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0810 22:20:53.759170  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0810 22:20:53.778558  346703 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0810 22:20:53.778611  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0810 22:20:53.784861  346703 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0810 22:20:53.784889  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0810 22:20:53.875876  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0810 22:20:53.970942  346703 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0810 22:20:53.970969  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0810 22:20:54.057719  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0810 22:20:54.080405  346703 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0810 22:20:54.080494  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0810 22:20:54.272908  346703 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.256858714s)
	I0810 22:20:54.272960  346703 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0810 22:20:54.377023  346703 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0810 22:20:54.377141  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0810 22:20:54.664086  346703 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0810 22:20:54.664120  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0810 22:20:54.978404  346703 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0810 22:20:54.978442  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0810 22:20:55.063430  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:20:55.161430  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.701876771s)
	I0810 22:20:55.263557  346703 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0810 22:20:55.263586  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0810 22:20:55.459285  346703 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0810 22:20:55.459384  346703 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0810 22:20:55.667985  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0810 22:20:55.981093  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.520201057s)
	I0810 22:20:55.981190  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (2.506462041s)
	I0810 22:20:56.281371  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.708443687s)
	I0810 22:20:56.281423  346703 addons.go:313] Verifying addon metrics-server=true in "addons-20210810222001-345780"
	I0810 22:20:57.072330  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:20:57.479093  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (3.798523005s)
	I0810 22:20:57.479141  346703 addons.go:313] Verifying addon ingress=true in "addons-20210810222001-345780"
	I0810 22:20:57.483982  346703 out.go:177] * Verifying ingress addon...
	I0810 22:20:57.485727  346703 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0810 22:20:57.586622  346703 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0810 22:20:57.586649  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:58.169427  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:58.665522  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:59.078164  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.318958102s)
	W0810 22:20:59.078224  346703 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0810 22:20:59.078248  346703 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0810 22:20:59.078332  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.202356263s)
	I0810 22:20:59.078358  346703 addons.go:313] Verifying addon registry=true in "addons-20210810222001-345780"
	I0810 22:20:59.079837  346703 out.go:177] * Verifying registry addon...
	I0810 22:20:59.078809  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.020989112s)
	W0810 22:20:59.080031  346703 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0810 22:20:59.080077  346703 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0810 22:20:59.082587  346703 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0810 22:20:59.164904  346703 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0810 22:20:59.164958  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:59.168830  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:59.355167  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0810 22:20:59.440718  346703 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0810 22:20:59.474194  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:20:59.676766  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:59.767836  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:00.175495  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:00.176150  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:00.664066  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:00.666450  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.998341664s)
	I0810 22:21:00.666487  346703 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210810222001-345780"
	I0810 22:21:00.669121  346703 out.go:177] * Verifying csi-hostpath-driver addon...
	I0810 22:21:00.671604  346703 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0810 22:21:00.680611  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:00.685476  346703 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0810 22:21:00.685501  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:01.176640  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:01.177814  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:01.264096  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:01.590811  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:01.669637  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:01.691417  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:01.962868  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:02.171728  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:02.171841  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:02.373438  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:02.477286  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (3.122067254s)
	I0810 22:21:02.477375  346703 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.036611668s)
	I0810 22:21:02.590936  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:02.669079  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:02.689759  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:03.090020  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:03.169486  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:03.190824  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:03.591077  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:03.669286  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:03.689737  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:04.090660  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:04.169410  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:04.190721  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:04.436201  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:04.590721  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:04.669588  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:04.689938  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:05.091012  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:05.169518  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:05.191377  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:05.591192  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:05.669845  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:05.689564  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:06.092965  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:06.169388  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:06.190508  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:06.436276  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:06.590792  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:06.670144  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:06.690283  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:07.091094  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:07.169580  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:07.191179  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:07.590780  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:07.669266  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:07.690464  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:08.090877  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:08.169254  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:08.190320  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:08.590686  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:08.669568  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:08.690609  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:08.936203  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:09.090768  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:09.169443  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:09.191073  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:09.590435  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:09.670206  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:09.690746  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:10.090750  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:10.169411  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:10.190830  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:10.590394  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:10.670381  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:10.693157  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:10.937033  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:11.090516  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:11.169712  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:11.190675  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:11.590932  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:11.670263  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:11.690592  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:12.091070  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:12.169661  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:12.192125  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:12.590835  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:12.669176  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:12.689932  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:13.092220  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:13.169749  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:13.191192  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:13.436095  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:13.590507  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:13.669682  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:13.690818  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:14.091123  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:14.169688  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:14.191255  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:14.591150  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:14.670171  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:14.690455  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:15.090705  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:15.169333  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:15.192505  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:15.436218  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:15.590763  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:15.669269  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:15.690813  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:16.091109  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:16.169777  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:16.193290  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:16.590383  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:16.669572  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:16.690881  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:17.090829  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:17.169589  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:17.191187  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:17.437045  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:17.590620  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:17.669273  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:17.690702  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:18.090954  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:18.169476  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:18.190325  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:18.590342  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:18.670191  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:18.690107  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:19.093200  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:19.169547  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:19.190713  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:19.591352  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:19.671778  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:19.690331  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:19.936215  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:20.090735  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:20.169148  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:20.189993  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:20.590974  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:20.669710  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:20.690430  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:21.090769  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:21.169374  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:21.191355  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:21.591373  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:21.671214  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:21.690644  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:22.090427  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:22.169837  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:22.191265  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:22.458688  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:22.591214  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:22.670051  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:22.690653  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:23.091140  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:23.170041  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:23.191177  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:23.591036  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:23.669405  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:23.690172  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:24.090365  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:24.170010  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:24.190719  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:24.591029  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:24.669478  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:24.690703  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:24.938792  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:25.091407  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:25.169932  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:25.190535  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:25.591017  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:25.669537  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:25.691403  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:26.093373  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:26.169182  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:26.191431  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:26.591394  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:26.669554  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:26.691625  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:27.091464  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:27.169920  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:27.190408  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:27.436313  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:27.662860  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:27.669283  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:27.763247  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:28.160891  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:28.169548  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:28.259532  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:28.660082  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:28.670549  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:28.760859  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:29.091782  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:29.170103  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:29.192226  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:29.469368  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:29.590934  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:29.670085  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:29.694078  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:30.091223  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:30.170366  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:30.191610  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:30.590544  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:30.669230  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:30.690883  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:31.091166  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:31.169747  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:31.191377  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:31.590396  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:31.670475  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:31.690901  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:31.937280  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:32.091378  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:32.169611  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:32.192064  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:32.590454  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:32.669190  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:32.691131  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:33.090834  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:33.169133  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:33.190676  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:33.590912  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:33.674078  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:33.690523  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:34.090530  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:34.168516  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:34.190832  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:34.436323  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:34.591211  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:34.669531  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:34.690860  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:35.091583  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:35.171185  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:35.191561  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:35.590880  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:35.669250  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:35.691472  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:36.090321  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:36.169672  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:36.190055  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:36.464711  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:36.591376  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:36.669050  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:36.691144  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:37.163240  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:37.169030  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:37.262902  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:37.590800  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:37.669561  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:37.764622  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:38.090722  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:38.169303  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:38.191970  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:38.590821  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:38.669478  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:38.691837  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:38.962749  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:39.090136  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:39.169927  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:39.191272  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:39.590790  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:39.670160  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:39.690638  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:40.167518  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:40.170376  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:40.191686  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:40.590777  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:40.670019  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:40.690615  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:40.963028  346703 pod_ready.go:102] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"False"
	I0810 22:21:41.161710  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:41.169937  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:41.190515  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:41.462564  346703 pod_ready.go:92] pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace has status "Ready":"True"
	I0810 22:21:41.462598  346703 pod_ready.go:81] duration metric: took 48.540409538s waiting for pod "coredns-558bd4d5db-brbjv" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.462614  346703 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210810222001-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.466716  346703 pod_ready.go:92] pod "etcd-addons-20210810222001-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:21:41.466778  346703 pod_ready.go:81] duration metric: took 4.153568ms waiting for pod "etcd-addons-20210810222001-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.466803  346703 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210810222001-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.471129  346703 pod_ready.go:92] pod "kube-apiserver-addons-20210810222001-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:21:41.471150  346703 pod_ready.go:81] duration metric: took 4.337796ms waiting for pod "kube-apiserver-addons-20210810222001-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.471163  346703 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210810222001-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.475301  346703 pod_ready.go:92] pod "kube-controller-manager-addons-20210810222001-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:21:41.475320  346703 pod_ready.go:81] duration metric: took 4.148424ms waiting for pod "kube-controller-manager-addons-20210810222001-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.475337  346703 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qk8m9" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.479656  346703 pod_ready.go:92] pod "kube-proxy-qk8m9" in "kube-system" namespace has status "Ready":"True"
	I0810 22:21:41.479675  346703 pod_ready.go:81] duration metric: took 4.329639ms waiting for pod "kube-proxy-qk8m9" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.479686  346703 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210810222001-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.591392  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:41.669452  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:41.691083  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:41.834650  346703 pod_ready.go:92] pod "kube-scheduler-addons-20210810222001-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:21:41.834672  346703 pod_ready.go:81] duration metric: took 354.977738ms waiting for pod "kube-scheduler-addons-20210810222001-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:21:41.834680  346703 pod_ready.go:38] duration metric: took 48.959853697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:21:41.834700  346703 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:21:41.834739  346703 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:21:41.888222  346703 api_server.go:70] duration metric: took 49.057907618s to wait for apiserver process to appear ...
	I0810 22:21:41.888252  346703 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:21:41.888321  346703 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:21:41.893476  346703 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0810 22:21:41.894432  346703 api_server.go:139] control plane version: v1.21.3
	I0810 22:21:41.894461  346703 api_server.go:129] duration metric: took 6.200255ms to wait for apiserver health ...
	I0810 22:21:41.894473  346703 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:21:42.038158  346703 system_pods.go:59] 19 kube-system pods found
	I0810 22:21:42.038197  346703 system_pods.go:61] "coredns-558bd4d5db-brbjv" [2fb27c03-62c7-4a77-911d-01dc60a4f4fa] Running
	I0810 22:21:42.038204  346703 system_pods.go:61] "csi-hostpath-attacher-0" [70fbd7d4-354d-4fd7-894e-c5fadbd9d35e] Running
	I0810 22:21:42.038212  346703 system_pods.go:61] "csi-hostpath-provisioner-0" [6e2ac221-f04f-41ed-bea0-dced477682ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0810 22:21:42.038219  346703 system_pods.go:61] "csi-hostpath-resizer-0" [a1ddf4d2-8b72-4944-8a5b-6cd68ddd6173] Running
	I0810 22:21:42.038226  346703 system_pods.go:61] "csi-hostpath-snapshotter-0" [b21645d9-9c75-4eba-a5cb-90dff0dffc01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0810 22:21:42.038240  346703 system_pods.go:61] "csi-hostpathplugin-0" [21f38d15-91c9-4e5b-aec6-3deeb3cf12bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0810 22:21:42.038245  346703 system_pods.go:61] "etcd-addons-20210810222001-345780" [2ed972f8-7fa7-420c-b8ce-cbcc4f3266d6] Running
	I0810 22:21:42.038252  346703 system_pods.go:61] "kindnet-89zpj" [a0b0738e-139a-4ce3-9910-13614b6c4153] Running
	I0810 22:21:42.038255  346703 system_pods.go:61] "kube-apiserver-addons-20210810222001-345780" [4d89bcab-d1e5-4468-9bad-e79a7fe52351] Running
	I0810 22:21:42.038262  346703 system_pods.go:61] "kube-controller-manager-addons-20210810222001-345780" [a915f00b-8119-413e-ab9b-e2ab3e5bb6c7] Running
	I0810 22:21:42.038265  346703 system_pods.go:61] "kube-proxy-qk8m9" [9d8d54cf-bfd0-413a-a4af-2eeaaaab22d3] Running
	I0810 22:21:42.038272  346703 system_pods.go:61] "kube-scheduler-addons-20210810222001-345780" [df6e3b26-3016-40d1-a138-7b1676310f38] Running
	I0810 22:21:42.038278  346703 system_pods.go:61] "metrics-server-77c99ccb96-87mh9" [19d2c0d3-1f9a-41b0-a39e-55b7b5644aec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0810 22:21:42.038285  346703 system_pods.go:61] "registry-42sw9" [ad871fd4-a4ea-463a-9d90-19741a4ffbcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0810 22:21:42.038296  346703 system_pods.go:61] "registry-proxy-jbgjs" [11981bc4-eb03-45e7-bc8c-b5a04d3ed1dd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0810 22:21:42.038306  346703 system_pods.go:61] "snapshot-controller-989f9ddc8-hk2r7" [1833572d-eb6d-45e7-8b28-551f8eeee620] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0810 22:21:42.038313  346703 system_pods.go:61] "snapshot-controller-989f9ddc8-srvnp" [1c02f364-7e32-44be-b4de-155380badbf8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0810 22:21:42.038320  346703 system_pods.go:61] "storage-provisioner" [0ce6de3a-b84b-4127-896d-f4765a56ccc4] Running
	I0810 22:21:42.038326  346703 system_pods.go:61] "tiller-deploy-768d69497-95j6c" [532cd5c7-b8b2-4b22-90de-521ff0e324e3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0810 22:21:42.038334  346703 system_pods.go:74] duration metric: took 143.854886ms to wait for pod list to return data ...
	I0810 22:21:42.038345  346703 default_sa.go:34] waiting for default service account to be created ...
	I0810 22:21:42.091342  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:42.169658  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:42.190921  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:42.234345  346703 default_sa.go:45] found service account: "default"
	I0810 22:21:42.234371  346703 default_sa.go:55] duration metric: took 196.017883ms for default service account to be created ...
	I0810 22:21:42.234380  346703 system_pods.go:116] waiting for k8s-apps to be running ...
	I0810 22:21:42.441004  346703 system_pods.go:86] 19 kube-system pods found
	I0810 22:21:42.441042  346703 system_pods.go:89] "coredns-558bd4d5db-brbjv" [2fb27c03-62c7-4a77-911d-01dc60a4f4fa] Running
	I0810 22:21:42.441051  346703 system_pods.go:89] "csi-hostpath-attacher-0" [70fbd7d4-354d-4fd7-894e-c5fadbd9d35e] Running
	I0810 22:21:42.441064  346703 system_pods.go:89] "csi-hostpath-provisioner-0" [6e2ac221-f04f-41ed-bea0-dced477682ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0810 22:21:42.441072  346703 system_pods.go:89] "csi-hostpath-resizer-0" [a1ddf4d2-8b72-4944-8a5b-6cd68ddd6173] Running
	I0810 22:21:42.441082  346703 system_pods.go:89] "csi-hostpath-snapshotter-0" [b21645d9-9c75-4eba-a5cb-90dff0dffc01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0810 22:21:42.441092  346703 system_pods.go:89] "csi-hostpathplugin-0" [21f38d15-91c9-4e5b-aec6-3deeb3cf12bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0810 22:21:42.441103  346703 system_pods.go:89] "etcd-addons-20210810222001-345780" [2ed972f8-7fa7-420c-b8ce-cbcc4f3266d6] Running
	I0810 22:21:42.441115  346703 system_pods.go:89] "kindnet-89zpj" [a0b0738e-139a-4ce3-9910-13614b6c4153] Running
	I0810 22:21:42.441122  346703 system_pods.go:89] "kube-apiserver-addons-20210810222001-345780" [4d89bcab-d1e5-4468-9bad-e79a7fe52351] Running
	I0810 22:21:42.441131  346703 system_pods.go:89] "kube-controller-manager-addons-20210810222001-345780" [a915f00b-8119-413e-ab9b-e2ab3e5bb6c7] Running
	I0810 22:21:42.441141  346703 system_pods.go:89] "kube-proxy-qk8m9" [9d8d54cf-bfd0-413a-a4af-2eeaaaab22d3] Running
	I0810 22:21:42.441149  346703 system_pods.go:89] "kube-scheduler-addons-20210810222001-345780" [df6e3b26-3016-40d1-a138-7b1676310f38] Running
	I0810 22:21:42.441163  346703 system_pods.go:89] "metrics-server-77c99ccb96-87mh9" [19d2c0d3-1f9a-41b0-a39e-55b7b5644aec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0810 22:21:42.441172  346703 system_pods.go:89] "registry-42sw9" [ad871fd4-a4ea-463a-9d90-19741a4ffbcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0810 22:21:42.441187  346703 system_pods.go:89] "registry-proxy-jbgjs" [11981bc4-eb03-45e7-bc8c-b5a04d3ed1dd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0810 22:21:42.441201  346703 system_pods.go:89] "snapshot-controller-989f9ddc8-hk2r7" [1833572d-eb6d-45e7-8b28-551f8eeee620] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0810 22:21:42.441215  346703 system_pods.go:89] "snapshot-controller-989f9ddc8-srvnp" [1c02f364-7e32-44be-b4de-155380badbf8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0810 22:21:42.441223  346703 system_pods.go:89] "storage-provisioner" [0ce6de3a-b84b-4127-896d-f4765a56ccc4] Running
	I0810 22:21:42.441233  346703 system_pods.go:89] "tiller-deploy-768d69497-95j6c" [532cd5c7-b8b2-4b22-90de-521ff0e324e3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0810 22:21:42.441244  346703 system_pods.go:126] duration metric: took 206.858591ms to wait for k8s-apps to be running ...
	I0810 22:21:42.441257  346703 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:21:42.441311  346703 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:21:42.451551  346703 system_svc.go:56] duration metric: took 10.287134ms WaitForService to wait for kubelet.
	I0810 22:21:42.451577  346703 kubeadm.go:547] duration metric: took 49.62126778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:21:42.451608  346703 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:21:42.591702  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:42.659879  346703 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0810 22:21:42.659912  346703 node_conditions.go:123] node cpu capacity is 8
	I0810 22:21:42.659933  346703 node_conditions.go:105] duration metric: took 208.315404ms to run NodePressure ...
	I0810 22:21:42.659950  346703 start.go:231] waiting for startup goroutines ...
	I0810 22:21:42.669802  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:42.693326  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:43.089952  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:43.169679  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:21:43.191920  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:21:43.590633  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:21:43.669572  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "kapi.go:96] waiting for pod <selector>, current state: Pending: [<nil>]" lines repeat at roughly 500 ms intervals for the selectors app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, and kubernetes.io/minikube-addons=csi-hostpath-driver, from 22:21:43 through 22:22:25 ...]
	I0810 22:22:25.174338  346703 kapi.go:108] duration metric: took 1m26.091748622s to wait for kubernetes.io/minikube-addons=registry ...
	[... polling continues for app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=csi-hostpath-driver, both still Pending, through 22:22:29 ...]
	I0810 22:22:29.590593  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:29.691350  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:30.092157  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:30.191309  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:30.590871  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:30.691646  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:31.091542  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:31.190990  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:31.591019  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:31.690593  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:32.091135  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:32.192290  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:32.590326  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:32.691049  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:33.092340  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:33.191267  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:33.591901  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:33.691586  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:34.090989  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:34.192331  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:34.591689  346703 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:22:34.690910  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:35.241262  346703 kapi.go:108] duration metric: took 1m37.755528691s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0810 22:22:35.242997  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:35.692981  346703 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:22:36.191525  346703 kapi.go:108] duration metric: took 1m35.519918113s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0810 22:22:36.194051  346703 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, helm-tiller, metrics-server, olm, volumesnapshots, registry, ingress, csi-hostpath-driver
	I0810 22:22:36.194079  346703 addons.go:344] enableAddons completed in 1m43.363428457s
	I0810 22:22:36.239293  346703 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0810 22:22:36.241636  346703 out.go:177] * Done! kubectl is now configured to use "addons-20210810222001-345780" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:20:12 UTC, end at Tue 2021-08-10 22:28:25 UTC. --
	Aug 10 22:24:01 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:01.228250962Z" level=info msg="Created container 5d9377e9b832ca98a776799c2149e98262a2df7c4076b45533fcf16ea9168215: my-etcd/etcd-operator-85cd4f54cd-2hf2t/etcd-restore-operator" id=a60522b6-fbac-493f-a7fd-54884ede7bca name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:24:01 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:01.228851703Z" level=info msg="Starting container: 5d9377e9b832ca98a776799c2149e98262a2df7c4076b45533fcf16ea9168215" id=62f9865b-da94-4603-b6cc-4607f36bc56e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 10 22:24:01 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:01.240092404Z" level=info msg="Started container 5d9377e9b832ca98a776799c2149e98262a2df7c4076b45533fcf16ea9168215: my-etcd/etcd-operator-85cd4f54cd-2hf2t/etcd-restore-operator" id=62f9865b-da94-4603-b6cc-4607f36bc56e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 10 22:24:43 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:43.703347719Z" level=info msg="Stopping pod sandbox: f8a9e72c0dae50a39edb65d74ed9608b83d62d1020284736589172c44167edc9" id=c79d9f10-75b5-45f3-984b-5f0b29eda824 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:24:43 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:43.703400637Z" level=info msg="Stopped pod sandbox (already stopped): f8a9e72c0dae50a39edb65d74ed9608b83d62d1020284736589172c44167edc9" id=c79d9f10-75b5-45f3-984b-5f0b29eda824 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:24:43 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:43.703672739Z" level=info msg="Removing pod sandbox: f8a9e72c0dae50a39edb65d74ed9608b83d62d1020284736589172c44167edc9" id=a3609c53-5b95-479c-bebb-301b350fe92c name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 10 22:24:43 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:43.809105300Z" level=info msg="Removed pod sandbox: f8a9e72c0dae50a39edb65d74ed9608b83d62d1020284736589172c44167edc9" id=a3609c53-5b95-479c-bebb-301b350fe92c name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 10 22:24:43 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:43.809620175Z" level=info msg="Stopping pod sandbox: b0205944c42f3a4430b678fa1ed5e7fc529b362270931c1f8818893bb7228ef7" id=728c5d5b-9205-4265-a5d1-88674b6ecb56 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:24:43 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:43.809669030Z" level=info msg="Stopped pod sandbox (already stopped): b0205944c42f3a4430b678fa1ed5e7fc529b362270931c1f8818893bb7228ef7" id=728c5d5b-9205-4265-a5d1-88674b6ecb56 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:24:43 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:43.809996539Z" level=info msg="Removing pod sandbox: b0205944c42f3a4430b678fa1ed5e7fc529b362270931c1f8818893bb7228ef7" id=10905813-68d5-49cb-97e3-37acdd9ee255 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 10 22:24:43 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:24:43.913079320Z" level=info msg="Removed pod sandbox: b0205944c42f3a4430b678fa1ed5e7fc529b362270931c1f8818893bb7228ef7" id=10905813-68d5-49cb-97e3-37acdd9ee255 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 10 22:25:40 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:25:40.508707539Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=346a2798-1dc6-4b2d-b772-f64042143f33 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:25:40 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:25:40.509332973Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253,RepoTags:[k8s.gcr.io/pause:3.4.1],RepoDigests:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2],Size_:689817,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=346a2798-1dc6-4b2d-b772-f64042143f33 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:27:57 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:27:57.317188862Z" level=info msg="Stopping container: 0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f (timeout: 29s)" id=058c15d3-687d-43c6-abb3-80a13ca7750a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 10 22:28:07 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:07.540511817Z" level=info msg="Stopped container 0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f: ingress-nginx/ingress-nginx-controller-59b45fb494-9fhc2/controller" id=058c15d3-687d-43c6-abb3-80a13ca7750a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 10 22:28:07 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:07.541131511Z" level=info msg="Stopping pod sandbox: 253c9470f6d52305eba2f5db266fed0a08813a10358d37bb3e92667406bd0a33" id=227664a1-f349-48e9-ad43-4299bb2e745b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:28:07 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:07.552671954Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-59b45fb494-9fhc2 Namespace:ingress-nginx ID:253c9470f6d52305eba2f5db266fed0a08813a10358d37bb3e92667406bd0a33 NetNS:/var/run/netns/30d5f72a-6103-4f3d-88ee-d41d0f4d1907 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 10 22:28:07 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:07.552833880Z" level=info msg="About to del CNI network kindnet (type=ptp)"
	Aug 10 22:28:07 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:07.812180469Z" level=info msg="Stopped pod sandbox: 253c9470f6d52305eba2f5db266fed0a08813a10358d37bb3e92667406bd0a33" id=227664a1-f349-48e9-ad43-4299bb2e745b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:28:08 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:08.142201777Z" level=info msg="Stopping pod sandbox: 253c9470f6d52305eba2f5db266fed0a08813a10358d37bb3e92667406bd0a33" id=fb881af9-9f2c-4e80-9ec1-51dfb909ddc2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:28:08 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:08.142249846Z" level=info msg="Stopped pod sandbox (already stopped): 253c9470f6d52305eba2f5db266fed0a08813a10358d37bb3e92667406bd0a33" id=fb881af9-9f2c-4e80-9ec1-51dfb909ddc2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:28:08 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:08.143017607Z" level=info msg="Removing container: 0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f" id=042d4202-0e90-470f-ad6e-3e0f2ac355ab name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 10 22:28:08 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:08.163067188Z" level=info msg="Removed container 0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f: ingress-nginx/ingress-nginx-controller-59b45fb494-9fhc2/controller" id=042d4202-0e90-470f-ad6e-3e0f2ac355ab name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 10 22:28:09 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:09.145425879Z" level=info msg="Stopping pod sandbox: 253c9470f6d52305eba2f5db266fed0a08813a10358d37bb3e92667406bd0a33" id=68bffde1-49da-4df9-93b0-e9c2881ab500 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:28:09 addons-20210810222001-345780 crio[372]: time="2021-08-10 22:28:09.145471547Z" level=info msg="Stopped pod sandbox (already stopped): 253c9470f6d52305eba2f5db266fed0a08813a10358d37bb3e92667406bd0a33" id=68bffde1-49da-4df9-93b0-e9c2881ab500 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	5d9377e9b832c       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                4 minutes ago       Running             etcd-restore-operator     0                   b04751980bef6
	8cdf30081928d       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                4 minutes ago       Running             etcd-backup-operator      0                   b04751980bef6
	1a98b8b75cfbd       quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b                                            4 minutes ago       Running             etcd-operator             0                   b04751980bef6
	934e1634e12d1       europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8   4 minutes ago       Running             private-image-eu          0                   fc7b2ae1239ab
	c9623ba0eb4e5       docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce                                                 5 minutes ago       Running             nginx                     0                   cf49c0a61b7dc
	9b658695bf047       us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8                5 minutes ago       Running             private-image             0                   9a8ea9b130105
	fcf93cb127b92       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                               5 minutes ago       Running             busybox                   0                   802331c5573bc
	9e492ba4abe6f       a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8                                                                                6 minutes ago       Exited              patch                     2                   fde1b944c7157
	d8b1e99b4310c       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0                 6 minutes ago       Running             registry-server           0                   1b73725fbcee5
	8e368f5039948       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             packageserver             0                   5e9b2cc5e58b2
	696be08f0f6d5       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             packageserver             0                   3e7c9f266550b
	2073520c1e10a       a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8                                                                                6 minutes ago       Exited              create                    0                   0dd8a315b846d
	f9604fda8d5c6       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                                                                6 minutes ago       Running             coredns                   0                   03669523496d0
	c64c6f3d4354f       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             catalog-operator          0                   da93ab29a0ed6
	f2a50b1283ce2       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             olm-operator              0                   e62c332357565
	da89f8b674c32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                                7 minutes ago       Running             storage-provisioner       0                   059cd4bdc47ad
	928f212d1a69d       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                                                                7 minutes ago       Running             kindnet-cni               0                   61eea836abeed
	0eb7551b53331       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                                                                7 minutes ago       Running             kube-proxy                0                   0da4ad92b3c52
	7292752f9aa72       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                                                                7 minutes ago       Running             kube-apiserver            0                   6454161199d03
	c3b53a5beeea6       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                                                                7 minutes ago       Running             etcd                      0                   e1abb603cd8c3
	18881bd74477c       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                                                                7 minutes ago       Running             kube-scheduler            0                   a50660b04b66e
	cb6a8c04a84b3       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                                                                7 minutes ago       Running             kube-controller-manager   0                   4ea50f7ff1a00
	
	* 
	* ==> coredns [f9604fda8d5c6107279c887cf0cb083426c6d844bce1567008a0d7cf3f234012] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210810222001-345780
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210810222001-345780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=addons-20210810222001-345780
	                    minikube.k8s.io/updated_at=2021_08_10T22_20_35_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210810222001-345780
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:20:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210810222001-345780
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:28:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:24:22 +0000   Tue, 10 Aug 2021 22:20:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:24:22 +0000   Tue, 10 Aug 2021 22:20:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:24:22 +0000   Tue, 10 Aug 2021 22:20:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:24:22 +0000   Tue, 10 Aug 2021 22:20:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210810222001-345780
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                441c6e30-9f31-446e-a66b-425e65be1d33
	  Boot ID:                    73822e98-d94c-4da2-a874-acfa9b587b30
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  default                     private-image-7ff9c8c74f-gmpwx                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  default                     private-image-eu-5956d58f9f-cfdfl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 coredns-558bd4d5db-brbjv                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m33s
	  kube-system                 etcd-addons-20210810222001-345780                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m45s
	  kube-system                 kindnet-89zpj                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m33s
	  kube-system                 kube-apiserver-addons-20210810222001-345780             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-controller-manager-addons-20210810222001-345780    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-proxy-qk8m9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-addons-20210810222001-345780             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  my-etcd                     etcd-operator-85cd4f54cd-2hf2t                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  olm                         catalog-operator-75d496484d-hrvkz                       10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         7m26s
	  olm                         olm-operator-859c88c96-rc8mk                            10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         7m27s
	  olm                         operatorhubio-catalog-7pg5t                             10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         6m46s
	  olm                         packageserver-675b7f455c-4qc9n                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  olm                         packageserver-675b7f455c-tltfn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                880m (11%)  100m (1%)
	  memory             510Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 7m45s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m45s  kubelet     Node addons-20210810222001-345780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s  kubelet     Node addons-20210810222001-345780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s  kubelet     Node addons-20210810222001-345780 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m35s  kubelet     Node addons-20210810222001-345780 status is now: NodeReady
	  Normal  Starting                 7m32s  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[  +4.031751] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[  +8.191492] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000004] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[  +6.088813] IPv4: martian source 10.244.0.34 from 10.244.0.34, on dev vethab6a65b0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 66 45 6e 14 53 de 08 06        ......fEn.S...
	[Aug10 22:24] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[ +34.045759] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[Aug10 22:25] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[  +1.010121] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[  +2.015846] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[  +4.063692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[Aug10 22:26] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[ +16.126913] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	[ +33.533839] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: 46 da a8 cb 4d 51 f2 0e 6a 14 ea 80 08 00        F...MQ..j.....
	
	* 
	* ==> etcd [1a98b8b75cfbdc3d06e3069c2ba8b1aecc912716b11597240548e78b239b43a7] <==
	* time="2021-08-10T22:24:00Z" level=info msg="etcd-operator Version: 0.9.4"
	time="2021-08-10T22:24:00Z" level=info msg="Git SHA: c8a1c64"
	time="2021-08-10T22:24:00Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-10T22:24:00Z" level=info msg="Go OS/Arch: linux/amd64"
	E0810 22:24:00.854949       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"d1027002-264b-44d4-9186-bb0fba04fb6c", ResourceVersion:"2123", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231040, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-2hf2t\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-10T22:24:00Z\",\"renewTime\":\"2021-08-10T22:24:00Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-2hf2t became leader'
	
	* 
	* ==> etcd [5d9377e9b832ca98a776799c2149e98262a2df7c4076b45533fcf16ea9168215] <==
	* time="2021-08-10T22:24:01Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-10T22:24:01Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-10T22:24:01Z" level=info msg="etcd-restore-operator Version: 0.9.4"
	time="2021-08-10T22:24:01Z" level=info msg="Git SHA: c8a1c64"
	E0810 22:24:01.282223       1 leaderelection.go:274] error initially creating leader election record: endpoints "etcd-restore-operator" already exists
	E0810 22:24:04.741068       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"e9fcc478-46f1-462c-b6f4-d01597b8943d", ResourceVersion:"2228", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231041, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-2hf2t\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-10T22:24:04Z\",\"renewTime\":\"2021-08-10T22:24:04Z\",\"leaderTransitions\":1}", "endpoints.kubernetes.io/last-change-trigger-time":"2021-08-10T22:24:01Z"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-2hf2t became leader'
	time="2021-08-10T22:24:04Z" level=info msg="listening on 0.0.0.0:19999"
	time="2021-08-10T22:24:04Z" level=info msg="starting restore controller" pkg=controller
	
	* 
	* ==> etcd [8cdf30081928d22213a55d128a04c74f3cdf0cd81e82818e2dd557a39d34b084] <==
	* time="2021-08-10T22:24:01Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-10T22:24:01Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-10T22:24:01Z" level=info msg="etcd-backup-operator Version: 0.9.4"
	time="2021-08-10T22:24:01Z" level=info msg="Git SHA: c8a1c64"
	E0810 22:24:01.072024       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"856bc4cf-a091-4a92-9c2d-66f92a8902f9", ResourceVersion:"2129", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231041, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-2hf2t\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-10T22:24:01Z\",\"renewTime\":\"2021-08-10T22:24:01Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-2hf2t became leader'
	time="2021-08-10T22:24:01Z" level=info msg="starting backup controller" pkg=controller
	
	* 
	* ==> etcd [c3b53a5beeea6a0b5d57888d0f9d9659f6648c0d64934e9dc570eddbad4aae14] <==
	* 2021-08-10 22:24:21.779455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:31.779117 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:41.779223 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:51.779534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:01.779270 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:11.779518 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:21.779396 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:31.779157 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:41.778534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:51.779418 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:26:01.779138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:26:11.778908 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:26:21.779548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:26:31.779589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:26:41.778790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:26:51.779125 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:27:01.779658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:27:11.779373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:27:21.780225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:27:31.778893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:27:41.779461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:27:51.779172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:28:01.779120 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:28:11.778649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:28:21.779159 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:28:25 up  2:11,  0 users,  load average: 4.48, 4.23, 3.31
	Linux addons-20210810222001-345780 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7292752f9aa72ddc658daf3d4b09a714609c45608f590de7e289b9d84874da1c] <==
	* I0810 22:24:00.765838       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0810 22:24:11.629089       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:24:11.629143       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:24:11.629152       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:24:45.277980       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:24:45.278028       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:24:45.278036       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:25:18.270755       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:25:18.270813       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:25:18.270826       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:25:51.302426       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:25:51.302471       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:25:51.302480       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:26:34.435750       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:26:34.435801       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:26:34.435812       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:27:15.654970       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:27:15.655026       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:27:15.655036       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0810 22:27:56.317838       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0810 22:27:58.219052       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:27:58.219105       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:27:58.219114       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0810 22:27:59.430249       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0810 22:28:06.933990       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [cb6a8c04a84b3301d0d48a0072c3e9d96c04f7917f9a25db8b0ca5bdb83271bd] <==
	* E0810 22:23:54.858520       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:01.246287       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:03.975492       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:04.377191       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:17.329118       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:23.625244       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:25.379313       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:45.776455       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:03.247224       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:06.037727       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:34.096130       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:38.084113       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:47.755062       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:26:09.821634       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:26:22.797486       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:26:28.623729       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:26:52.458446       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:27:03.805658       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:27:15.238516       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:27:27.061145       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:27:54.005101       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:28:01.167752       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-7v4tx" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	E0810 22:28:01.402506       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:28:08.365567       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:28:24.160777       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [0eb7551b53331b290e86aea660402350633d99a6940577d889c8e674b7d2adae] <==
	* I0810 22:20:53.086234       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0810 22:20:53.086323       1 server_others.go:140] Detected node IP 192.168.49.2
	W0810 22:20:53.086376       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0810 22:20:53.260666       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0810 22:20:53.261316       1 server_others.go:212] Using iptables Proxier.
	I0810 22:20:53.261341       1 server_others.go:219] creating dualStackProxier for iptables.
	W0810 22:20:53.261357       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0810 22:20:53.261938       1 server.go:643] Version: v1.21.3
	I0810 22:20:53.270670       1 config.go:315] Starting service config controller
	I0810 22:20:53.275251       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0810 22:20:53.271560       1 config.go:224] Starting endpoint slice config controller
	I0810 22:20:53.275515       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0810 22:20:53.360645       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0810 22:20:53.362774       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0810 22:20:53.376980       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0810 22:20:53.377028       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [18881bd74477c2e58f5aaa5f097f87e5fb31e9fe66b356f6d0ebc5b1546babe2] <==
	* W0810 22:20:32.092053       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0810 22:20:32.092065       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0810 22:20:32.092077       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0810 22:20:32.178400       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0810 22:20:32.178563       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0810 22:20:32.178581       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0810 22:20:32.178928       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0810 22:20:32.181613       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:20:32.181663       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:20:32.181820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:20:32.258032       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:20:32.257988       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:20:32.258254       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:20:32.258298       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:20:32.258792       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:20:32.258905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:20:32.258957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:20:32.258971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:20:32.258986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:20:32.259001       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:20:32.259000       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:20:33.045319       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:20:33.170008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:20:33.220651       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0810 22:20:33.679677       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:20:12 UTC, end at Tue 2021-08-10 22:28:25 UTC. --
	Aug 10 22:27:53 addons-20210810222001-345780 kubelet[1567]: I0810 22:27:53.562164    1567 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-gmpwx" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:27:54 addons-20210810222001-345780 kubelet[1567]: E0810 22:27:54.152048    1567 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:27:56 addons-20210810222001-345780 kubelet[1567]: E0810 22:27:56.257386    1567 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9fhc2.169a126d5e3eb8f8", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9fhc2", UID:"9edac9dd-f002-42a1-8d69-fb3795a223a1", APIVersion:"v1", ResourceVersion:"617", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810222001-345780"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3b0b4f80f8, ext:441126106391, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3b0b4f80f8, ext:441126106391, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9fhc2.169a126d5e3eb8f8" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:27:57 addons-20210810222001-345780 kubelet[1567]: E0810 22:27:57.369102    1567 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9fhc2.169a126da456ae28", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9fhc2", UID:"9edac9dd-f002-42a1-8d69-fb3795a223a1", APIVersion:"v1", ResourceVersion:"617", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810222001-345780"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3b55ccac28, ext:442302081616, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3b55ccac28, ext:442302081616, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9fhc2.169a126da456ae28" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:27:57 addons-20210810222001-345780 kubelet[1567]: E0810 22:27:57.370694    1567 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9fhc2.169a126da456b067", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9fhc2", UID:"9edac9dd-f002-42a1-8d69-fb3795a223a1", APIVersion:"v1", ResourceVersion:"617", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810222001-345780"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3b55ccae67, ext:442302082229, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3b55ccae67, ext:442302082229, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9fhc2.169a126da456b067" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:28:00 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:00.563414    1567 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:28:04 addons-20210810222001-345780 kubelet[1567]: E0810 22:28:04.279865    1567 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:28:07 addons-20210810222001-345780 kubelet[1567]: E0810 22:28:07.373852    1567 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9fhc2.169a126da456b067", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9fhc2", UID:"9edac9dd-f002-42a1-8d69-fb3795a223a1", APIVersion:"v1", ResourceVersion:"617", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810222001-345780"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3b55ccae67, ext:442302082229, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3dd5d1fe60, ext:452302430364, loc:(*time.Location)(0x74c3600)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9fhc2.169a126da456b067" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:28:07 addons-20210810222001-345780 kubelet[1567]: E0810 22:28:07.377272    1567 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9fhc2.169a126da456ae28", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9fhc2", UID:"9edac9dd-f002-42a1-8d69-fb3795a223a1", APIVersion:"v1", ResourceVersion:"617", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liv
eness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810222001-345780"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3b55ccac28, ext:442302081616, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd3dd5d2301c, ext:452302443075, loc:(*time.Location)(0x74c3600)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9fhc2.169a126da456ae28" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.141949    1567 scope.go:111] "RemoveContainer" containerID="0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f"
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.163340    1567 scope.go:111] "RemoveContainer" containerID="0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f"
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: E0810 22:28:08.163679    1567 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f\": container with ID starting with 0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f not found: ID does not exist" containerID="0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f"
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.163729    1567 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f} err="failed to get container status \"0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f\": rpc error: code = NotFound desc = could not find container \"0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f\": container with ID starting with 0b2ca7766933422799c31c039e1183f98f50a9014e55f8a7f0203d5025fbf20f not found: ID does not exist"
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.287825    1567 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snqlh\" (UniqueName: \"kubernetes.io/projected/9edac9dd-f002-42a1-8d69-fb3795a223a1-kube-api-access-snqlh\") pod \"9edac9dd-f002-42a1-8d69-fb3795a223a1\" (UID: \"9edac9dd-f002-42a1-8d69-fb3795a223a1\") "
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.287900    1567 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9edac9dd-f002-42a1-8d69-fb3795a223a1-webhook-cert\") pod \"9edac9dd-f002-42a1-8d69-fb3795a223a1\" (UID: \"9edac9dd-f002-42a1-8d69-fb3795a223a1\") "
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.317536    1567 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9edac9dd-f002-42a1-8d69-fb3795a223a1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9edac9dd-f002-42a1-8d69-fb3795a223a1" (UID: "9edac9dd-f002-42a1-8d69-fb3795a223a1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.317547    1567 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9edac9dd-f002-42a1-8d69-fb3795a223a1-kube-api-access-snqlh" (OuterVolumeSpecName: "kube-api-access-snqlh") pod "9edac9dd-f002-42a1-8d69-fb3795a223a1" (UID: "9edac9dd-f002-42a1-8d69-fb3795a223a1"). InnerVolumeSpecName "kube-api-access-snqlh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.388564    1567 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9edac9dd-f002-42a1-8d69-fb3795a223a1-webhook-cert\") on node \"addons-20210810222001-345780\" DevicePath \"\""
	Aug 10 22:28:08 addons-20210810222001-345780 kubelet[1567]: I0810 22:28:08.388604    1567 reconciler.go:319] "Volume detached for volume \"kube-api-access-snqlh\" (UniqueName: \"kubernetes.io/projected/9edac9dd-f002-42a1-8d69-fb3795a223a1-kube-api-access-snqlh\") on node \"addons-20210810222001-345780\" DevicePath \"\""
	Aug 10 22:28:14 addons-20210810222001-345780 kubelet[1567]: E0810 22:28:14.402520    1567 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:28:14 addons-20210810222001-345780 kubelet[1567]: E0810 22:28:14.417483    1567 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/9edac9dd-f002-42a1-8d69-fb3795a223a1/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-9fhc2"
	Aug 10 22:28:24 addons-20210810222001-345780 kubelet[1567]: W0810 22:28:24.312971    1567 container.go:586] Failed to update stats for container "/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd": /sys/fs/cgroup/cpuset/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/cpuset.cpus found to be empty, continuing to push stats
	Aug 10 22:28:24 addons-20210810222001-345780 kubelet[1567]: W0810 22:28:24.514697    1567 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 10 22:28:24 addons-20210810222001-345780 kubelet[1567]: E0810 22:28:24.523340    1567 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd/docker/88caeb25c14e7b7168bf3a516ee48ff40faaa567023ad05a7290db0ddac2a7cd\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:28:24 addons-20210810222001-345780 kubelet[1567]: E0810 22:28:24.533821    1567 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/9edac9dd-f002-42a1-8d69-fb3795a223a1/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-9fhc2"
	
	* 
	* ==> storage-provisioner [da89f8b674c3291be8ec674b52563f361ef049862420957c3d9077251f96d2a0] <==
	* I0810 22:20:58.267085       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:20:58.286031       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:20:58.286110       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:20:58.374475       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:20:58.375068       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210810222001-345780_bea5ee36-09ef-4f65-812c-b2a95753915e!
	I0810 22:20:58.375128       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc76dcb2-4e46-46b2-80d9-5ed5e80af41a", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210810222001-345780_bea5ee36-09ef-4f65-812c-b2a95753915e became leader
	I0810 22:20:58.559060       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210810222001-345780_bea5ee36-09ef-4f65-812c-b2a95753915e!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210810222001-345780 -n addons-20210810222001-345780
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210810222001-345780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context addons-20210810222001-345780 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context addons-20210810222001-345780 describe pod : exit status 1 (52.076982ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context addons-20210810222001-345780 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Ingress (305.38s)
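The post-mortem above fails at its last step because the non-running-pod query returned an empty list, so `kubectl describe pod` was invoked with no names and kubectl rejected it ("resource name may not be empty"). A minimal sketch of how that step could be guarded — `describe_nonrunning` is an illustrative helper, not part of helpers_test.go:

```shell
# Hypothetical hardening of the post-mortem step: kubectl rejects
# `describe pod` with no resource names, so skip the call when the
# non-running-pod query came back empty.
describe_nonrunning() {
  ctx="$1"; pods="$2"
  if [ -z "$pods" ]; then
    echo "no non-running pods to describe"
    return 0
  fi
  # shellcheck disable=SC2086  # splitting pod names into args is intended
  kubectl --context "$ctx" describe pod $pods
}

# With the empty result seen in this run, the guard short-circuits
# instead of producing exit status 1.
describe_nonrunning "addons-20210810222001-345780" ""
```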

TestMultiNode/serial/PingHostFrom2Pods (3.69s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-crhdk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-crhdk -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-crhdk -- sh -c "ping -c 1 192.168.49.1": exit status 1 (195.91727ms)

-- stdout --
	PING 192.168.49.1 (192.168.49.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:538: Failed to ping host (192.168.49.1) from pod (busybox-84b6686758-crhdk): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-h8c2g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-h8c2g -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-h8c2g -- sh -c "ping -c 1 192.168.49.1": exit status 1 (180.328818ms)

-- stdout --
	PING 192.168.49.1 (192.168.49.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:538: Failed to ping host (192.168.49.1) from pod (busybox-84b6686758-h8c2g): exit status 1
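Both pods print "ping: permission denied (are you root?)" even though the ICMP socket opened (the PING banner appears). This is the classic symptom of an unprivileged container whose `ping` binary has neither CAP_NET_RAW nor a GID covered by the kernel's `net.ipv4.ping_group_range`. A small sketch of how that sysctl value is interpreted — the helper is illustrative, not part of the test harness, and assumes the kernel default of `1 0`:

```shell
# /proc/sys/net/ipv4/ping_group_range holds "min max"; a GID inside the
# range may open unprivileged ICMP datagram sockets. The kernel default
# "1 0" has min > max, i.e. no group qualifies, so ping falls back to a
# raw socket and fails without CAP_NET_RAW.
range_allows_gid() {
  # $1 = "min max" as read from ping_group_range, $2 = GID to test
  set -- $1 "$2"   # split "min max"; the GID becomes $3
  min=$1; max=$2; gid=$3
  [ "$gid" -ge "$min" ] && [ "$gid" -le "$max" ]
}

if range_allows_gid "1 0" 0; then
  echo "GID 0 may ping unprivileged"
else
  echo "unprivileged ping disabled (kernel default), CAP_NET_RAW required"
fi
```

Inside the pod, `cat /proc/sys/net/ipv4/ping_group_range` would confirm which case applies on this node.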
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect multinode-20210810223625-345780
helpers_test.go:236: (dbg) docker inspect multinode-20210810223625-345780:

-- stdout --
	[
	    {
	        "Id": "b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa",
	        "Created": "2021-08-10T22:36:26.898286148Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 412031,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:36:27.373971901Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/hosts",
	        "LogPath": "/var/lib/docker/containers/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa-json.log",
	        "Name": "/multinode-20210810223625-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20210810223625-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20210810223625-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2ba7ee1cac18ce997d89e4697511f8485dc431e327b951ac52eac8347a07099-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2ba7ee1cac18ce997d89e4697511f8485dc431e327b951ac52eac8347a07099/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2ba7ee1cac18ce997d89e4697511f8485dc431e327b951ac52eac8347a07099/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2ba7ee1cac18ce997d89e4697511f8485dc431e327b951ac52eac8347a07099/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20210810223625-345780",
	                "Source": "/var/lib/docker/volumes/multinode-20210810223625-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20210810223625-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20210810223625-345780",
	                "name.minikube.sigs.k8s.io": "multinode-20210810223625-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00c0ab33a2f6cb5af8ede693a943f0ecf53bc58b481e3c62c7415e90ba044d68",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/00c0ab33a2f6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20210810223625-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b91fa3f28869"
	                    ],
	                    "NetworkID": "9dce73d5f5ee979d46cabbc55a560a612cca06aad170103e552df048c4f7ae41",
	                    "EndpointID": "a1030e864099bdfde37831a26ceaf9c5d0bcc787b051d2229db3498b3bb2b9ad",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210810223625-345780 -n multinode-20210810223625-345780
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223625-345780 logs -n 25: (1.275349434s)
E0810 22:38:35.586649  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                 |   User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| pause   | -p                                                | json-output-20210810223309-345780       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:34:40 UTC | Tue, 10 Aug 2021 22:34:40 UTC |
	|         | json-output-20210810223309-345780                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| unpause | -p                                                | json-output-20210810223309-345780       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:34:40 UTC | Tue, 10 Aug 2021 22:34:41 UTC |
	|         | json-output-20210810223309-345780                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| stop    | -p                                                | json-output-20210810223309-345780       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:34:41 UTC | Tue, 10 Aug 2021 22:34:52 UTC |
	|         | json-output-20210810223309-345780                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| delete  | -p                                                | json-output-20210810223309-345780       | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:52 UTC | Tue, 10 Aug 2021 22:34:58 UTC |
	|         | json-output-20210810223309-345780                 |                                         |          |         |                               |                               |
	| delete  | -p                                                | json-output-error-20210810223458-345780 | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:58 UTC | Tue, 10 Aug 2021 22:34:58 UTC |
	|         | json-output-error-20210810223458-345780           |                                         |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210810223458-345780    | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:58 UTC | Tue, 10 Aug 2021 22:35:29 UTC |
	|         | docker-network-20210810223458-345780              |                                         |          |         |                               |                               |
	|         | --network=                                        |                                         |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210810223458-345780    | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:35:29 UTC | Tue, 10 Aug 2021 22:35:32 UTC |
	|         | docker-network-20210810223458-345780              |                                         |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210810223532-345780    | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:35:32 UTC | Tue, 10 Aug 2021 22:35:55 UTC |
	|         | docker-network-20210810223532-345780              |                                         |          |         |                               |                               |
	|         | --network=bridge                                  |                                         |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210810223532-345780    | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:35:56 UTC | Tue, 10 Aug 2021 22:35:58 UTC |
	|         | docker-network-20210810223532-345780              |                                         |          |         |                               |                               |
	| start   | -p                                                | existing-network-20210810223558-345780  | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:35:58 UTC | Tue, 10 Aug 2021 22:36:22 UTC |
	|         | existing-network-20210810223558-345780            |                                         |          |         |                               |                               |
	|         | --network=existing-network                        |                                         |          |         |                               |                               |
	| delete  | -p                                                | existing-network-20210810223558-345780  | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:36:22 UTC | Tue, 10 Aug 2021 22:36:25 UTC |
	|         | existing-network-20210810223558-345780            |                                         |          |         |                               |                               |
	| start   | -p                                                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:36:25 UTC | Tue, 10 Aug 2021 22:38:27 UTC |
	|         | multinode-20210810223625-345780                   |                                         |          |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                         |          |         |                               |                               |
	|         | --nodes=2 -v=8                                    |                                         |          |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |          |         |                               |                               |
	|         | --driver=docker                                   |                                         |          |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210810223625-345780 -- apply -f    | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:28 UTC | Tue, 10 Aug 2021 22:38:28 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:28 UTC | Tue, 10 Aug 2021 22:38:31 UTC |
	|         | multinode-20210810223625-345780                   |                                         |          |         |                               |                               |
	|         | -- rollout status                                 |                                         |          |         |                               |                               |
	|         | deployment/busybox                                |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210810223625-345780                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:31 UTC | Tue, 10 Aug 2021 22:38:31 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210810223625-345780                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:31 UTC | Tue, 10 Aug 2021 22:38:31 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:31 UTC | Tue, 10 Aug 2021 22:38:31 UTC |
	|         | multinode-20210810223625-345780                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-crhdk --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:31 UTC | Tue, 10 Aug 2021 22:38:31 UTC |
	|         | multinode-20210810223625-345780                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-h8c2g --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:31 UTC | Tue, 10 Aug 2021 22:38:32 UTC |
	|         | multinode-20210810223625-345780                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-crhdk --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:32 UTC | Tue, 10 Aug 2021 22:38:32 UTC |
	|         | multinode-20210810223625-345780                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-h8c2g --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210810223625-345780                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:32 UTC | Tue, 10 Aug 2021 22:38:32 UTC |
	|         | -- exec busybox-84b6686758-crhdk                  |                                         |          |         |                               |                               |
	|         | -- nslookup                                       |                                         |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210810223625-345780                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:32 UTC | Tue, 10 Aug 2021 22:38:32 UTC |
	|         | -- exec busybox-84b6686758-h8c2g                  |                                         |          |         |                               |                               |
	|         | -- nslookup                                       |                                         |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210810223625-345780                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:32 UTC | Tue, 10 Aug 2021 22:38:32 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:32 UTC | Tue, 10 Aug 2021 22:38:33 UTC |
	|         | multinode-20210810223625-345780                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-crhdk                          |                                         |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                         |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                         |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210810223625-345780         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:38:33 UTC | Tue, 10 Aug 2021 22:38:33 UTC |
	|         | multinode-20210810223625-345780                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-h8c2g                          |                                         |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                         |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                         |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                         |          |         |                               |                               |
	|---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:36:25
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:36:25.357368  411387 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:36:25.357497  411387 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:36:25.357509  411387 out.go:311] Setting ErrFile to fd 2...
	I0810 22:36:25.357515  411387 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:36:25.357646  411387 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:36:25.358000  411387 out.go:305] Setting JSON to false
	I0810 22:36:25.396434  411387 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":8347,"bootTime":1628626639,"procs":171,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:36:25.396587  411387 start.go:121] virtualization: kvm guest
	I0810 22:36:25.399946  411387 out.go:177] * [multinode-20210810223625-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:36:25.401656  411387 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:36:25.400157  411387 notify.go:169] Checking for updates...
	I0810 22:36:25.403535  411387 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:36:25.405331  411387 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:36:25.407171  411387 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:36:25.407417  411387 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:36:25.454689  411387 docker.go:132] docker version: linux-19.03.15
	I0810 22:36:25.454806  411387 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:36:25.537491  411387 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:36:25.490156586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:36:25.537619  411387 docker.go:244] overlay module found
	I0810 22:36:25.540236  411387 out.go:177] * Using the docker driver based on user configuration
	I0810 22:36:25.540270  411387 start.go:278] selected driver: docker
	I0810 22:36:25.540277  411387 start.go:751] validating driver "docker" against <nil>
	I0810 22:36:25.540310  411387 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:36:25.540365  411387 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:36:25.540403  411387 out.go:242] ! Your cgroup does not allow setting memory.
	I0810 22:36:25.542144  411387 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:36:25.543091  411387 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:36:25.624029  411387 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:36:25.578515616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:36:25.624165  411387 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:36:25.624334  411387 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:36:25.624361  411387 cni.go:93] Creating CNI manager for ""
	I0810 22:36:25.624367  411387 cni.go:154] 0 nodes found, recommending kindnet
	I0810 22:36:25.624379  411387 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0810 22:36:25.624389  411387 start_flags.go:277] config:
	{Name:multinode-20210810223625-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223625-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:36:25.627004  411387 out.go:177] * Starting control plane node multinode-20210810223625-345780 in cluster multinode-20210810223625-345780
	I0810 22:36:25.627072  411387 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:36:25.628944  411387 out.go:177] * Pulling base image ...
	I0810 22:36:25.628980  411387 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:36:25.629036  411387 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:36:25.629049  411387 cache.go:56] Caching tarball of preloaded images
	I0810 22:36:25.629080  411387 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:36:25.629792  411387 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:36:25.629827  411387 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:36:25.630835  411387 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/config.json ...
	I0810 22:36:25.631166  411387 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/config.json: {Name:mk951449075d3c2e8346454294882640342abbf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:36:25.719563  411387 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:36:25.719619  411387 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:36:25.719641  411387 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:36:25.719699  411387 start.go:313] acquiring machines lock for multinode-20210810223625-345780: {Name:mk603e11d070dbf128d8c272c2afc1f95432c7a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:36:25.719854  411387 start.go:317] acquired machines lock for "multinode-20210810223625-345780" in 132.987µs
	I0810 22:36:25.719882  411387 start.go:89] Provisioning new machine with config: &{Name:multinode-20210810223625-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223625-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:36:25.719965  411387 start.go:126] createHost starting for "" (driver="docker")
	I0810 22:36:25.722783  411387 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0810 22:36:25.723049  411387 start.go:160] libmachine.API.Create for "multinode-20210810223625-345780" (driver="docker")
	I0810 22:36:25.723083  411387 client.go:168] LocalClient.Create starting
	I0810 22:36:25.723176  411387 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:36:25.723247  411387 main.go:130] libmachine: Decoding PEM data...
	I0810 22:36:25.723266  411387 main.go:130] libmachine: Parsing certificate...
	I0810 22:36:25.723369  411387 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:36:25.723387  411387 main.go:130] libmachine: Decoding PEM data...
	I0810 22:36:25.723398  411387 main.go:130] libmachine: Parsing certificate...
	I0810 22:36:25.723733  411387 cli_runner.go:115] Run: docker network inspect multinode-20210810223625-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0810 22:36:25.762634  411387 cli_runner.go:162] docker network inspect multinode-20210810223625-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0810 22:36:25.762723  411387 network_create.go:255] running [docker network inspect multinode-20210810223625-345780] to gather additional debugging logs...
	I0810 22:36:25.762749  411387 cli_runner.go:115] Run: docker network inspect multinode-20210810223625-345780
	W0810 22:36:25.802699  411387 cli_runner.go:162] docker network inspect multinode-20210810223625-345780 returned with exit code 1
	I0810 22:36:25.802741  411387 network_create.go:258] error running [docker network inspect multinode-20210810223625-345780]: docker network inspect multinode-20210810223625-345780: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20210810223625-345780
	I0810 22:36:25.802760  411387 network_create.go:260] output of [docker network inspect multinode-20210810223625-345780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20210810223625-345780
	
	** /stderr **
	I0810 22:36:25.802822  411387 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:36:25.842332  411387 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005f08c0] misses:0}
	I0810 22:36:25.842403  411387 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0810 22:36:25.842424  411387 network_create.go:106] attempt to create docker network multinode-20210810223625-345780 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0810 22:36:25.842491  411387 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20210810223625-345780
	I0810 22:36:25.912877  411387 network_create.go:90] docker network multinode-20210810223625-345780 192.168.49.0/24 created
	I0810 22:36:25.912998  411387 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20210810223625-345780" container
	I0810 22:36:25.913070  411387 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0810 22:36:25.949395  411387 cli_runner.go:115] Run: docker volume create multinode-20210810223625-345780 --label name.minikube.sigs.k8s.io=multinode-20210810223625-345780 --label created_by.minikube.sigs.k8s.io=true
	I0810 22:36:25.987209  411387 oci.go:102] Successfully created a docker volume multinode-20210810223625-345780
	I0810 22:36:25.987289  411387 cli_runner.go:115] Run: docker run --rm --name multinode-20210810223625-345780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210810223625-345780 --entrypoint /usr/bin/test -v multinode-20210810223625-345780:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0810 22:36:26.773915  411387 oci.go:106] Successfully prepared a docker volume multinode-20210810223625-345780
	W0810 22:36:26.773979  411387 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0810 22:36:26.773988  411387 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0810 22:36:26.774056  411387 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0810 22:36:26.774056  411387 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:36:26.774091  411387 kic.go:179] Starting extracting preloaded images to volume ...
	I0810 22:36:26.774155  411387 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210810223625-345780:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0810 22:36:26.855132  411387 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210810223625-345780 --name multinode-20210810223625-345780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210810223625-345780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210810223625-345780 --network multinode-20210810223625-345780 --ip 192.168.49.2 --volume multinode-20210810223625-345780:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0810 22:36:27.384070  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Running}}
	I0810 22:36:27.431960  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Status}}
	I0810 22:36:27.480020  411387 cli_runner.go:115] Run: docker exec multinode-20210810223625-345780 stat /var/lib/dpkg/alternatives/iptables
	I0810 22:36:27.615414  411387 oci.go:278] the created container "multinode-20210810223625-345780" has a running status.
	I0810 22:36:27.615454  411387 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa...
	I0810 22:36:27.792233  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0810 22:36:27.792288  411387 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0810 22:36:28.151342  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Status}}
	I0810 22:36:28.194021  411387 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0810 22:36:28.194042  411387 kic_runner.go:115] Args: [docker exec --privileged multinode-20210810223625-345780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0810 22:36:30.402081  411387 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210810223625-345780:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (3.627868201s)
	I0810 22:36:30.402115  411387 kic.go:188] duration metric: took 3.628021 seconds to extract preloaded images to volume
	I0810 22:36:30.402193  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Status}}
	I0810 22:36:30.440762  411387 machine.go:88] provisioning docker machine ...
	I0810 22:36:30.440811  411387 ubuntu.go:169] provisioning hostname "multinode-20210810223625-345780"
	I0810 22:36:30.440882  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:36:30.479059  411387 main.go:130] libmachine: Using SSH client type: native
	I0810 22:36:30.479244  411387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0810 22:36:30.479260  411387 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210810223625-345780 && echo "multinode-20210810223625-345780" | sudo tee /etc/hostname
	I0810 22:36:30.601823  411387 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210810223625-345780
	
	I0810 22:36:30.601924  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:36:30.642478  411387 main.go:130] libmachine: Using SSH client type: native
	I0810 22:36:30.642648  411387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0810 22:36:30.642667  411387 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210810223625-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210810223625-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210810223625-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:36:30.756856  411387 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:36:30.756891  411387 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:36:30.756962  411387 ubuntu.go:177] setting up certificates
	I0810 22:36:30.756973  411387 provision.go:83] configureAuth start
	I0810 22:36:30.757027  411387 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210810223625-345780
	I0810 22:36:30.796799  411387 provision.go:137] copyHostCerts
	I0810 22:36:30.796879  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:36:30.796960  411387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:36:30.796974  411387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:36:30.797031  411387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:36:30.797097  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:36:30.797119  411387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:36:30.797123  411387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:36:30.797140  411387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:36:30.797175  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:36:30.797193  411387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:36:30.797199  411387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:36:30.797213  411387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:36:30.797256  411387 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210810223625-345780 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210810223625-345780]
	I0810 22:36:30.940376  411387 provision.go:171] copyRemoteCerts
	I0810 22:36:30.940462  411387 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:36:30.940518  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:36:30.979148  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:36:31.064727  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0810 22:36:31.064795  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0810 22:36:31.082392  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0810 22:36:31.082455  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 22:36:31.100327  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0810 22:36:31.100397  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:36:31.117709  411387 provision.go:86] duration metric: configureAuth took 360.718422ms
	I0810 22:36:31.117745  411387 ubuntu.go:193] setting minikube options for container-runtime
	I0810 22:36:31.118044  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:36:31.156906  411387 main.go:130] libmachine: Using SSH client type: native
	I0810 22:36:31.157095  411387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33047 <nil> <nil>}
	I0810 22:36:31.157116  411387 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:36:31.510327  411387 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:36:31.510371  411387 machine.go:91] provisioned docker machine in 1.069574486s
	I0810 22:36:31.510384  411387 client.go:171] LocalClient.Create took 5.787293044s
	I0810 22:36:31.510396  411387 start.go:168] duration metric: libmachine.API.Create for "multinode-20210810223625-345780" took 5.787347209s
	I0810 22:36:31.510406  411387 start.go:267] post-start starting for "multinode-20210810223625-345780" (driver="docker")
	I0810 22:36:31.510412  411387 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:36:31.510482  411387 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:36:31.510528  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:36:31.549277  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:36:31.632417  411387 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:36:31.635100  411387 command_runner.go:124] > NAME="Ubuntu"
	I0810 22:36:31.635123  411387 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0810 22:36:31.635130  411387 command_runner.go:124] > ID=ubuntu
	I0810 22:36:31.635136  411387 command_runner.go:124] > ID_LIKE=debian
	I0810 22:36:31.635142  411387 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0810 22:36:31.635151  411387 command_runner.go:124] > VERSION_ID="20.04"
	I0810 22:36:31.635160  411387 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0810 22:36:31.635171  411387 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0810 22:36:31.635186  411387 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0810 22:36:31.635199  411387 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0810 22:36:31.635210  411387 command_runner.go:124] > VERSION_CODENAME=focal
	I0810 22:36:31.635219  411387 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0810 22:36:31.635283  411387 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 22:36:31.635311  411387 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 22:36:31.635325  411387 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 22:36:31.635337  411387 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0810 22:36:31.635352  411387 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:36:31.635405  411387 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:36:31.635502  411387 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> 3457802.pem in /etc/ssl/certs
	I0810 22:36:31.635514  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> /etc/ssl/certs/3457802.pem
	I0810 22:36:31.635615  411387 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:36:31.641877  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:36:31.657679  411387 start.go:270] post-start completed in 147.260151ms
	I0810 22:36:31.658008  411387 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210810223625-345780
	I0810 22:36:31.695513  411387 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/config.json ...
	I0810 22:36:31.695780  411387 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:36:31.695833  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:36:31.736153  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:36:31.817766  411387 command_runner.go:124] > 29%!
	(MISSING)I0810 22:36:31.817831  411387 start.go:129] duration metric: createHost completed in 6.097856132s
	I0810 22:36:31.817847  411387 start.go:80] releasing machines lock for "multinode-20210810223625-345780", held for 6.097977387s
	I0810 22:36:31.817942  411387 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210810223625-345780
	I0810 22:36:31.859356  411387 ssh_runner.go:149] Run: systemctl --version
	I0810 22:36:31.859412  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:36:31.859419  411387 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:36:31.859476  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:36:31.903260  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:36:31.909857  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:36:32.274341  411387 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0810 22:36:32.274461  411387 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0810 22:36:32.274473  411387 command_runner.go:124] > <H1>302 Moved</H1>
	I0810 22:36:32.274478  411387 command_runner.go:124] > The document has moved
	I0810 22:36:32.274484  411387 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0810 22:36:32.274488  411387 command_runner.go:124] > </BODY></HTML>
	I0810 22:36:32.274587  411387 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.7)
	I0810 22:36:32.274621  411387 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0810 22:36:32.274718  411387 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:36:32.366536  411387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:36:32.375766  411387 docker.go:153] disabling docker service ...
	I0810 22:36:32.375831  411387 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:36:32.385485  411387 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:36:32.396689  411387 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:36:32.461038  411387 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0810 22:36:32.461122  411387 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:36:32.471559  411387 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0810 22:36:32.531038  411387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:36:32.540420  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:36:32.554028  411387 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:36:32.554050  411387 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:36:32.554086  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:36:32.561910  411387 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:36:32.561944  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:36:32.570066  411387 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:36:32.575841  411387 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:36:32.576371  411387 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:36:32.576420  411387 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:36:32.583350  411387 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:36:32.589582  411387 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:36:32.646877  411387 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:36:32.656862  411387 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:36:32.656954  411387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:36:32.660231  411387 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0810 22:36:32.660257  411387 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0810 22:36:32.660268  411387 command_runner.go:124] > Device: 36h/54d	Inode: 2094885     Links: 1
	I0810 22:36:32.660279  411387 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:36:32.660285  411387 command_runner.go:124] > Access: 2021-08-10 22:36:31.497592672 +0000
	I0810 22:36:32.660292  411387 command_runner.go:124] > Modify: 2021-08-10 22:36:31.497592672 +0000
	I0810 22:36:32.660297  411387 command_runner.go:124] > Change: 2021-08-10 22:36:31.497592672 +0000
	I0810 22:36:32.660301  411387 command_runner.go:124] >  Birth: -
	I0810 22:36:32.660314  411387 start.go:417] Will wait 60s for crictl version
	I0810 22:36:32.660357  411387 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:36:32.688002  411387 command_runner.go:124] > Version:  0.1.0
	I0810 22:36:32.688024  411387 command_runner.go:124] > RuntimeName:  cri-o
	I0810 22:36:32.688031  411387 command_runner.go:124] > RuntimeVersion:  1.20.3
	I0810 22:36:32.688038  411387 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0810 22:36:32.689757  411387 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0810 22:36:32.689840  411387 ssh_runner.go:149] Run: crio --version
	I0810 22:36:32.749772  411387 command_runner.go:124] > crio version 1.20.3
	I0810 22:36:32.749804  411387 command_runner.go:124] > Version:       1.20.3
	I0810 22:36:32.749817  411387 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0810 22:36:32.749825  411387 command_runner.go:124] > GitTreeState:  clean
	I0810 22:36:32.749835  411387 command_runner.go:124] > BuildDate:     2021-06-03T20:25:45Z
	I0810 22:36:32.749846  411387 command_runner.go:124] > GoVersion:     go1.15.2
	I0810 22:36:32.749851  411387 command_runner.go:124] > Compiler:      gc
	I0810 22:36:32.749856  411387 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:36:32.749860  411387 command_runner.go:124] > Linkmode:      dynamic
	I0810 22:36:32.751111  411387 command_runner.go:124] ! time="2021-08-10T22:36:32Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0810 22:36:32.751205  411387 ssh_runner.go:149] Run: crio --version
	I0810 22:36:32.813489  411387 command_runner.go:124] > crio version 1.20.3
	I0810 22:36:32.813511  411387 command_runner.go:124] > Version:       1.20.3
	I0810 22:36:32.813527  411387 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0810 22:36:32.813531  411387 command_runner.go:124] > GitTreeState:  clean
	I0810 22:36:32.813538  411387 command_runner.go:124] > BuildDate:     2021-06-03T20:25:45Z
	I0810 22:36:32.813542  411387 command_runner.go:124] > GoVersion:     go1.15.2
	I0810 22:36:32.813546  411387 command_runner.go:124] > Compiler:      gc
	I0810 22:36:32.813551  411387 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:36:32.813555  411387 command_runner.go:124] > Linkmode:      dynamic
	I0810 22:36:32.814800  411387 command_runner.go:124] ! time="2021-08-10T22:36:32Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0810 22:36:32.818286  411387 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0810 22:36:32.818358  411387 cli_runner.go:115] Run: docker network inspect multinode-20210810223625-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:36:32.856548  411387 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0810 22:36:32.859995  411387 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:36:32.869573  411387 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:36:32.869630  411387 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:36:32.912499  411387 command_runner.go:124] > {
	I0810 22:36:32.912530  411387 command_runner.go:124] >   "images": [
	I0810 22:36:32.912538  411387 command_runner.go:124] >     {
	I0810 22:36:32.912552  411387 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0810 22:36:32.912560  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.912570  411387 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0810 22:36:32.912575  411387 command_runner.go:124] >       ],
	I0810 22:36:32.912579  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.912589  411387 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0810 22:36:32.912621  411387 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0810 22:36:32.912626  411387 command_runner.go:124] >       ],
	I0810 22:36:32.912634  411387 command_runner.go:124] >       "size": "119984626",
	I0810 22:36:32.912639  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.912646  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.912652  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.912659  411387 command_runner.go:124] >     },
	I0810 22:36:32.912663  411387 command_runner.go:124] >     {
	I0810 22:36:32.912669  411387 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0810 22:36:32.912681  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.912690  411387 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0810 22:36:32.912694  411387 command_runner.go:124] >       ],
	I0810 22:36:32.912698  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.912706  411387 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0810 22:36:32.912718  411387 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0810 22:36:32.912724  411387 command_runner.go:124] >       ],
	I0810 22:36:32.912729  411387 command_runner.go:124] >       "size": "228528983",
	I0810 22:36:32.912736  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.912741  411387 command_runner.go:124] >       "username": "nonroot",
	I0810 22:36:32.912754  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.912761  411387 command_runner.go:124] >     },
	I0810 22:36:32.912765  411387 command_runner.go:124] >     {
	I0810 22:36:32.912771  411387 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0810 22:36:32.912779  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.912784  411387 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0810 22:36:32.912791  411387 command_runner.go:124] >       ],
	I0810 22:36:32.912796  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.912809  411387 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0810 22:36:32.912821  411387 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0810 22:36:32.912827  411387 command_runner.go:124] >       ],
	I0810 22:36:32.912832  411387 command_runner.go:124] >       "size": "36950651",
	I0810 22:36:32.912839  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.912843  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.912847  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.912851  411387 command_runner.go:124] >     },
	I0810 22:36:32.912856  411387 command_runner.go:124] >     {
	I0810 22:36:32.912871  411387 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0810 22:36:32.912882  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.912896  411387 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0810 22:36:32.912906  411387 command_runner.go:124] >       ],
	I0810 22:36:32.912913  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.912946  411387 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0810 22:36:32.912961  411387 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0810 22:36:32.912966  411387 command_runner.go:124] >       ],
	I0810 22:36:32.912971  411387 command_runner.go:124] >       "size": "31470524",
	I0810 22:36:32.912980  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.912989  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.912993  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.913002  411387 command_runner.go:124] >     },
	I0810 22:36:32.913006  411387 command_runner.go:124] >     {
	I0810 22:36:32.913013  411387 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0810 22:36:32.913020  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.913026  411387 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0810 22:36:32.913032  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913037  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.913050  411387 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0810 22:36:32.913064  411387 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0810 22:36:32.913068  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913073  411387 command_runner.go:124] >       "size": "42585056",
	I0810 22:36:32.913077  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.913081  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.913085  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.913089  411387 command_runner.go:124] >     },
	I0810 22:36:32.913093  411387 command_runner.go:124] >     {
	I0810 22:36:32.913099  411387 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0810 22:36:32.913107  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.913122  411387 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0810 22:36:32.913126  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913138  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.913146  411387 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0810 22:36:32.913157  411387 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0810 22:36:32.913161  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913166  411387 command_runner.go:124] >       "size": "254662613",
	I0810 22:36:32.913170  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.913174  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.913182  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.913187  411387 command_runner.go:124] >     },
	I0810 22:36:32.913190  411387 command_runner.go:124] >     {
	I0810 22:36:32.913197  411387 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0810 22:36:32.913204  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.913210  411387 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0810 22:36:32.913216  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913221  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.913234  411387 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0810 22:36:32.913245  411387 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0810 22:36:32.913255  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913263  411387 command_runner.go:124] >       "size": "126878961",
	I0810 22:36:32.913275  411387 command_runner.go:124] >       "uid": {
	I0810 22:36:32.913281  411387 command_runner.go:124] >         "value": "0"
	I0810 22:36:32.913288  411387 command_runner.go:124] >       },
	I0810 22:36:32.913299  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.913308  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.913312  411387 command_runner.go:124] >     },
	I0810 22:36:32.913316  411387 command_runner.go:124] >     {
	I0810 22:36:32.913322  411387 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0810 22:36:32.913332  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.913338  411387 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0810 22:36:32.913347  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913352  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.913360  411387 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0810 22:36:32.913372  411387 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0810 22:36:32.913379  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913428  411387 command_runner.go:124] >       "size": "121087578",
	I0810 22:36:32.913439  411387 command_runner.go:124] >       "uid": {
	I0810 22:36:32.913447  411387 command_runner.go:124] >         "value": "0"
	I0810 22:36:32.913453  411387 command_runner.go:124] >       },
	I0810 22:36:32.913470  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.913482  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.913489  411387 command_runner.go:124] >     },
	I0810 22:36:32.913499  411387 command_runner.go:124] >     {
	I0810 22:36:32.913511  411387 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0810 22:36:32.913522  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.913529  411387 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0810 22:36:32.913533  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913537  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.913548  411387 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0810 22:36:32.913564  411387 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0810 22:36:32.913571  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913583  411387 command_runner.go:124] >       "size": "105129702",
	I0810 22:36:32.913594  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.913604  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.913610  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.913620  411387 command_runner.go:124] >     },
	I0810 22:36:32.913625  411387 command_runner.go:124] >     {
	I0810 22:36:32.913632  411387 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0810 22:36:32.913640  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.913653  411387 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0810 22:36:32.913664  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913671  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.913689  411387 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0810 22:36:32.913708  411387 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0810 22:36:32.913718  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913728  411387 command_runner.go:124] >       "size": "51893338",
	I0810 22:36:32.913733  411387 command_runner.go:124] >       "uid": {
	I0810 22:36:32.913737  411387 command_runner.go:124] >         "value": "0"
	I0810 22:36:32.913741  411387 command_runner.go:124] >       },
	I0810 22:36:32.913748  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.913759  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.913765  411387 command_runner.go:124] >     },
	I0810 22:36:32.913775  411387 command_runner.go:124] >     {
	I0810 22:36:32.913787  411387 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0810 22:36:32.913798  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.913810  411387 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0810 22:36:32.913819  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913826  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.913839  411387 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0810 22:36:32.913855  411387 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0810 22:36:32.913866  411387 command_runner.go:124] >       ],
	I0810 22:36:32.913878  411387 command_runner.go:124] >       "size": "689817",
	I0810 22:36:32.913889  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.913900  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.913911  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.913921  411387 command_runner.go:124] >     }
	I0810 22:36:32.913926  411387 command_runner.go:124] >   ]
	I0810 22:36:32.913932  411387 command_runner.go:124] > }
	I0810 22:36:32.914464  411387 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:36:32.914482  411387 crio.go:333] Images already preloaded, skipping extraction
	I0810 22:36:32.914535  411387 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:36:32.938733  411387 command_runner.go:124] > {
	I0810 22:36:32.938758  411387 command_runner.go:124] >   "images": [
	I0810 22:36:32.938764  411387 command_runner.go:124] >     {
	I0810 22:36:32.938777  411387 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0810 22:36:32.938784  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.938794  411387 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0810 22:36:32.938799  411387 command_runner.go:124] >       ],
	I0810 22:36:32.938805  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.938820  411387 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0810 22:36:32.938833  411387 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0810 22:36:32.938838  411387 command_runner.go:124] >       ],
	I0810 22:36:32.938847  411387 command_runner.go:124] >       "size": "119984626",
	I0810 22:36:32.938854  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.938859  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.938869  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.938876  411387 command_runner.go:124] >     },
	I0810 22:36:32.938881  411387 command_runner.go:124] >     {
	I0810 22:36:32.938894  411387 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0810 22:36:32.938904  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.938919  411387 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0810 22:36:32.938926  411387 command_runner.go:124] >       ],
	I0810 22:36:32.938933  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.938943  411387 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0810 22:36:32.938954  411387 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0810 22:36:32.938964  411387 command_runner.go:124] >       ],
	I0810 22:36:32.938973  411387 command_runner.go:124] >       "size": "228528983",
	I0810 22:36:32.938990  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.938997  411387 command_runner.go:124] >       "username": "nonroot",
	I0810 22:36:32.939005  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.939010  411387 command_runner.go:124] >     },
	I0810 22:36:32.939015  411387 command_runner.go:124] >     {
	I0810 22:36:32.939025  411387 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0810 22:36:32.939029  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.939035  411387 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0810 22:36:32.939038  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939043  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.939057  411387 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0810 22:36:32.939074  411387 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0810 22:36:32.939083  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939092  411387 command_runner.go:124] >       "size": "36950651",
	I0810 22:36:32.939102  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.939112  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.939119  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.939123  411387 command_runner.go:124] >     },
	I0810 22:36:32.939132  411387 command_runner.go:124] >     {
	I0810 22:36:32.939145  411387 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0810 22:36:32.939155  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.939166  411387 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0810 22:36:32.939175  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939185  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.939200  411387 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0810 22:36:32.939213  411387 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0810 22:36:32.939220  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939227  411387 command_runner.go:124] >       "size": "31470524",
	I0810 22:36:32.939248  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.939258  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.939267  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.939275  411387 command_runner.go:124] >     },
	I0810 22:36:32.939286  411387 command_runner.go:124] >     {
	I0810 22:36:32.939301  411387 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0810 22:36:32.939309  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.939316  411387 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0810 22:36:32.939324  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939333  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.939350  411387 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0810 22:36:32.939365  411387 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0810 22:36:32.939374  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939384  411387 command_runner.go:124] >       "size": "42585056",
	I0810 22:36:32.939393  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.939399  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.939407  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.939415  411387 command_runner.go:124] >     },
	I0810 22:36:32.939424  411387 command_runner.go:124] >     {
	I0810 22:36:32.939437  411387 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0810 22:36:32.939447  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.939457  411387 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0810 22:36:32.939465  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939477  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.939492  411387 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0810 22:36:32.939504  411387 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0810 22:36:32.939511  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939518  411387 command_runner.go:124] >       "size": "254662613",
	I0810 22:36:32.939528  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.939534  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.939543  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.939551  411387 command_runner.go:124] >     },
	I0810 22:36:32.939557  411387 command_runner.go:124] >     {
	I0810 22:36:32.939570  411387 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0810 22:36:32.939580  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.939590  411387 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0810 22:36:32.939597  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939601  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.939616  411387 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0810 22:36:32.939631  411387 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0810 22:36:32.939640  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939651  411387 command_runner.go:124] >       "size": "126878961",
	I0810 22:36:32.939657  411387 command_runner.go:124] >       "uid": {
	I0810 22:36:32.939668  411387 command_runner.go:124] >         "value": "0"
	I0810 22:36:32.939679  411387 command_runner.go:124] >       },
	I0810 22:36:32.939688  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.939695  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.939699  411387 command_runner.go:124] >     },
	I0810 22:36:32.939707  411387 command_runner.go:124] >     {
	I0810 22:36:32.939720  411387 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0810 22:36:32.939727  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.939738  411387 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0810 22:36:32.939747  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939755  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.939771  411387 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0810 22:36:32.939785  411387 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0810 22:36:32.939792  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939801  411387 command_runner.go:124] >       "size": "121087578",
	I0810 22:36:32.939810  411387 command_runner.go:124] >       "uid": {
	I0810 22:36:32.939820  411387 command_runner.go:124] >         "value": "0"
	I0810 22:36:32.939825  411387 command_runner.go:124] >       },
	I0810 22:36:32.939846  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.939855  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.939863  411387 command_runner.go:124] >     },
	I0810 22:36:32.939873  411387 command_runner.go:124] >     {
	I0810 22:36:32.939886  411387 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0810 22:36:32.939893  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.939900  411387 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0810 22:36:32.939908  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939915  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.939930  411387 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0810 22:36:32.939944  411387 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0810 22:36:32.939953  411387 command_runner.go:124] >       ],
	I0810 22:36:32.939962  411387 command_runner.go:124] >       "size": "105129702",
	I0810 22:36:32.939970  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.939975  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.939983  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.939991  411387 command_runner.go:124] >     },
	I0810 22:36:32.939996  411387 command_runner.go:124] >     {
	I0810 22:36:32.940009  411387 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0810 22:36:32.940019  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.940030  411387 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0810 22:36:32.940040  411387 command_runner.go:124] >       ],
	I0810 22:36:32.940049  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.940058  411387 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0810 22:36:32.940082  411387 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0810 22:36:32.940092  411387 command_runner.go:124] >       ],
	I0810 22:36:32.940098  411387 command_runner.go:124] >       "size": "51893338",
	I0810 22:36:32.940107  411387 command_runner.go:124] >       "uid": {
	I0810 22:36:32.940123  411387 command_runner.go:124] >         "value": "0"
	I0810 22:36:32.940132  411387 command_runner.go:124] >       },
	I0810 22:36:32.940142  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.940151  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.940159  411387 command_runner.go:124] >     },
	I0810 22:36:32.940162  411387 command_runner.go:124] >     {
	I0810 22:36:32.940170  411387 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0810 22:36:32.940179  411387 command_runner.go:124] >       "repoTags": [
	I0810 22:36:32.940190  411387 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0810 22:36:32.940195  411387 command_runner.go:124] >       ],
	I0810 22:36:32.940204  411387 command_runner.go:124] >       "repoDigests": [
	I0810 22:36:32.940219  411387 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0810 22:36:32.940234  411387 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0810 22:36:32.940242  411387 command_runner.go:124] >       ],
	I0810 22:36:32.940252  411387 command_runner.go:124] >       "size": "689817",
	I0810 22:36:32.940260  411387 command_runner.go:124] >       "uid": null,
	I0810 22:36:32.940270  411387 command_runner.go:124] >       "username": "",
	I0810 22:36:32.940280  411387 command_runner.go:124] >       "spec": null
	I0810 22:36:32.940286  411387 command_runner.go:124] >     }
	I0810 22:36:32.940294  411387 command_runner.go:124] >   ]
	I0810 22:36:32.940299  411387 command_runner.go:124] > }
	I0810 22:36:32.940472  411387 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:36:32.940487  411387 cache_images.go:74] Images are preloaded, skipping loading
	I0810 22:36:32.940562  411387 ssh_runner.go:149] Run: crio config
	I0810 22:36:33.007945  411387 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0810 22:36:33.007979  411387 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0810 22:36:33.007990  411387 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0810 22:36:33.007995  411387 command_runner.go:124] > #
	I0810 22:36:33.008012  411387 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0810 22:36:33.008027  411387 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0810 22:36:33.008040  411387 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0810 22:36:33.008055  411387 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0810 22:36:33.008064  411387 command_runner.go:124] > # reload'.
	I0810 22:36:33.008079  411387 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0810 22:36:33.008091  411387 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0810 22:36:33.008103  411387 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0810 22:36:33.008115  411387 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0810 22:36:33.008120  411387 command_runner.go:124] > [crio]
	I0810 22:36:33.008135  411387 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0810 22:36:33.008153  411387 command_runner.go:124] > # container images, in this directory.
	I0810 22:36:33.008164  411387 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0810 22:36:33.008182  411387 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0810 22:36:33.008194  411387 command_runner.go:124] > #runroot = "/run/containers/storage"
	I0810 22:36:33.008206  411387 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0810 22:36:33.008219  411387 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0810 22:36:33.008230  411387 command_runner.go:124] > #storage_driver = "overlay"
	I0810 22:36:33.008239  411387 command_runner.go:124] > # List of options to pass to the storage driver. Please refer to
	I0810 22:36:33.008252  411387 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0810 22:36:33.008262  411387 command_runner.go:124] > #storage_option = [
	I0810 22:36:33.008269  411387 command_runner.go:124] > #	"overlay.mountopt=nodev",
	I0810 22:36:33.008277  411387 command_runner.go:124] > #]
	I0810 22:36:33.008288  411387 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0810 22:36:33.008300  411387 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0810 22:36:33.008310  411387 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0810 22:36:33.008319  411387 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0810 22:36:33.008332  411387 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0810 22:36:33.008339  411387 command_runner.go:124] > # always happen on a node reboot
	I0810 22:36:33.008349  411387 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0810 22:36:33.008362  411387 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0810 22:36:33.008379  411387 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0810 22:36:33.008391  411387 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0810 22:36:33.008548  411387 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0810 22:36:33.008570  411387 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0810 22:36:33.008576  411387 command_runner.go:124] > [crio.api]
	I0810 22:36:33.008597  411387 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0810 22:36:33.008608  411387 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0810 22:36:33.008616  411387 command_runner.go:124] > # IP address on which the stream server will listen.
	I0810 22:36:33.008629  411387 command_runner.go:124] > stream_address = "127.0.0.1"
	I0810 22:36:33.008641  411387 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0810 22:36:33.008646  411387 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0810 22:36:33.008652  411387 command_runner.go:124] > stream_port = "0"
	I0810 22:36:33.008659  411387 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0810 22:36:33.008665  411387 command_runner.go:124] > stream_enable_tls = false
	I0810 22:36:33.008672  411387 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0810 22:36:33.008679  411387 command_runner.go:124] > stream_idle_timeout = ""
	I0810 22:36:33.008686  411387 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0810 22:36:33.008695  411387 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0810 22:36:33.008701  411387 command_runner.go:124] > # minutes.
	I0810 22:36:33.008705  411387 command_runner.go:124] > stream_tls_cert = ""
	I0810 22:36:33.008717  411387 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0810 22:36:33.008726  411387 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0810 22:36:33.008731  411387 command_runner.go:124] > stream_tls_key = ""
	I0810 22:36:33.008741  411387 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0810 22:36:33.008749  411387 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0810 22:36:33.008766  411387 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0810 22:36:33.008770  411387 command_runner.go:124] > stream_tls_ca = ""
	I0810 22:36:33.008779  411387 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:36:33.008784  411387 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0810 22:36:33.008791  411387 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:36:33.008796  411387 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0810 22:36:33.008802  411387 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0810 22:36:33.008808  411387 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0810 22:36:33.008812  411387 command_runner.go:124] > [crio.runtime]
	I0810 22:36:33.008818  411387 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0810 22:36:33.008823  411387 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0810 22:36:33.008827  411387 command_runner.go:124] > # "nofile=1024:2048"
	I0810 22:36:33.008834  411387 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0810 22:36:33.008844  411387 command_runner.go:124] > #default_ulimits = [
	I0810 22:36:33.008852  411387 command_runner.go:124] > #]
	I0810 22:36:33.008858  411387 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0810 22:36:33.008865  411387 command_runner.go:124] > no_pivot = false
	I0810 22:36:33.008870  411387 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0810 22:36:33.008881  411387 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0810 22:36:33.008889  411387 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0810 22:36:33.008895  411387 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0810 22:36:33.008902  411387 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0810 22:36:33.008905  411387 command_runner.go:124] > conmon = ""
	I0810 22:36:33.008911  411387 command_runner.go:124] > # Cgroup setting for conmon
	I0810 22:36:33.008935  411387 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0810 22:36:33.008949  411387 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0810 22:36:33.008957  411387 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0810 22:36:33.008961  411387 command_runner.go:124] > conmon_env = [
	I0810 22:36:33.008967  411387 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0810 22:36:33.008973  411387 command_runner.go:124] > ]
	I0810 22:36:33.008980  411387 command_runner.go:124] > # Additional environment variables to set for all the
	I0810 22:36:33.008985  411387 command_runner.go:124] > # containers. These are overridden if set in the
	I0810 22:36:33.008994  411387 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0810 22:36:33.008998  411387 command_runner.go:124] > default_env = [
	I0810 22:36:33.009004  411387 command_runner.go:124] > ]
	I0810 22:36:33.009012  411387 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0810 22:36:33.009016  411387 command_runner.go:124] > selinux = false
	I0810 22:36:33.009023  411387 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0810 22:36:33.009033  411387 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0810 22:36:33.009042  411387 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0810 22:36:33.009046  411387 command_runner.go:124] > seccomp_profile = ""
	I0810 22:36:33.009055  411387 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0810 22:36:33.009068  411387 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0810 22:36:33.009081  411387 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0810 22:36:33.009091  411387 command_runner.go:124] > # which might increase security.
	I0810 22:36:33.009101  411387 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0810 22:36:33.009111  411387 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0810 22:36:33.009120  411387 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0810 22:36:33.009129  411387 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0810 22:36:33.009138  411387 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0810 22:36:33.009146  411387 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:36:33.009150  411387 command_runner.go:124] > apparmor_profile = "crio-default"
	I0810 22:36:33.009160  411387 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0810 22:36:33.009168  411387 command_runner.go:124] > # irqbalance daemon.
	I0810 22:36:33.009175  411387 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0810 22:36:33.009181  411387 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0810 22:36:33.009187  411387 command_runner.go:124] > cgroup_manager = "systemd"
	I0810 22:36:33.009193  411387 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0810 22:36:33.009201  411387 command_runner.go:124] > separate_pull_cgroup = ""
	I0810 22:36:33.009239  411387 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0810 22:36:33.009253  411387 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0810 22:36:33.009259  411387 command_runner.go:124] > # will be added.
	I0810 22:36:33.009264  411387 command_runner.go:124] > default_capabilities = [
	I0810 22:36:33.009269  411387 command_runner.go:124] > 	"CHOWN",
	I0810 22:36:33.009275  411387 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0810 22:36:33.009281  411387 command_runner.go:124] > 	"FSETID",
	I0810 22:36:33.009286  411387 command_runner.go:124] > 	"FOWNER",
	I0810 22:36:33.009292  411387 command_runner.go:124] > 	"SETGID",
	I0810 22:36:33.009300  411387 command_runner.go:124] > 	"SETUID",
	I0810 22:36:33.009306  411387 command_runner.go:124] > 	"SETPCAP",
	I0810 22:36:33.009315  411387 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0810 22:36:33.009321  411387 command_runner.go:124] > 	"KILL",
	I0810 22:36:33.009329  411387 command_runner.go:124] > ]
	I0810 22:36:33.009340  411387 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0810 22:36:33.009353  411387 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:36:33.009361  411387 command_runner.go:124] > default_sysctls = [
	I0810 22:36:33.009371  411387 command_runner.go:124] > ]
	I0810 22:36:33.009382  411387 command_runner.go:124] > # List of additional devices, specified as
	I0810 22:36:33.009398  411387 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0810 22:36:33.009409  411387 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0810 22:36:33.009422  411387 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:36:33.009431  411387 command_runner.go:124] > additional_devices = [
	I0810 22:36:33.009439  411387 command_runner.go:124] > ]
	I0810 22:36:33.009450  411387 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0810 22:36:33.009463  411387 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0810 22:36:33.009471  411387 command_runner.go:124] > hooks_dir = [
	I0810 22:36:33.009479  411387 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0810 22:36:33.009488  411387 command_runner.go:124] > ]
	I0810 22:36:33.009499  411387 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0810 22:36:33.009513  411387 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0810 22:36:33.009524  411387 command_runner.go:124] > # its default mounts from the following two files:
	I0810 22:36:33.009531  411387 command_runner.go:124] > #
	I0810 22:36:33.009545  411387 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0810 22:36:33.009558  411387 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0810 22:36:33.009573  411387 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0810 22:36:33.009581  411387 command_runner.go:124] > #
	I0810 22:36:33.009591  411387 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0810 22:36:33.009604  411387 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0810 22:36:33.009615  411387 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0810 22:36:33.009626  411387 command_runner.go:124] > #      only add mounts it finds in this file.
	I0810 22:36:33.009633  411387 command_runner.go:124] > #
	I0810 22:36:33.009641  411387 command_runner.go:124] > #default_mounts_file = ""
	I0810 22:36:33.009652  411387 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0810 22:36:33.009661  411387 command_runner.go:124] > pids_limit = 1024
	I0810 22:36:33.009675  411387 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0810 22:36:33.009687  411387 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0810 22:36:33.009700  411387 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0810 22:36:33.009707  411387 command_runner.go:124] > # limit is never exceeded.
	I0810 22:36:33.009715  411387 command_runner.go:124] > log_size_max = -1
	I0810 22:36:33.009741  411387 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0810 22:36:33.009756  411387 command_runner.go:124] > log_to_journald = false
	I0810 22:36:33.009767  411387 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0810 22:36:33.009779  411387 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0810 22:36:33.009790  411387 command_runner.go:124] > # Path to directory for container attach sockets.
	I0810 22:36:33.009801  411387 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0810 22:36:33.009815  411387 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0810 22:36:33.009825  411387 command_runner.go:124] > bind_mount_prefix = ""
	I0810 22:36:33.009841  411387 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0810 22:36:33.009849  411387 command_runner.go:124] > read_only = false
	I0810 22:36:33.009860  411387 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0810 22:36:33.009873  411387 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0810 22:36:33.009880  411387 command_runner.go:124] > # live configuration reload.
	I0810 22:36:33.009889  411387 command_runner.go:124] > log_level = "info"
	I0810 22:36:33.009899  411387 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0810 22:36:33.009913  411387 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:36:33.009922  411387 command_runner.go:124] > log_filter = ""
	I0810 22:36:33.009933  411387 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0810 22:36:33.009946  411387 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0810 22:36:33.009954  411387 command_runner.go:124] > # separated by comma.
	I0810 22:36:33.009959  411387 command_runner.go:124] > uid_mappings = ""
	I0810 22:36:33.009967  411387 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0810 22:36:33.009977  411387 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0810 22:36:33.009982  411387 command_runner.go:124] > # separated by comma.
	I0810 22:36:33.009988  411387 command_runner.go:124] > gid_mappings = ""
	I0810 22:36:33.009998  411387 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0810 22:36:33.010012  411387 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0810 22:36:33.010025  411387 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0810 22:36:33.010036  411387 command_runner.go:124] > ctr_stop_timeout = 30
	I0810 22:36:33.010046  411387 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0810 22:36:33.010058  411387 command_runner.go:124] > # and manage their lifecycle.
	I0810 22:36:33.010071  411387 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0810 22:36:33.010081  411387 command_runner.go:124] > manage_ns_lifecycle = true
	I0810 22:36:33.010092  411387 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0810 22:36:33.010104  411387 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0810 22:36:33.010115  411387 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0810 22:36:33.010126  411387 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0810 22:36:33.010134  411387 command_runner.go:124] > drop_infra_ctr = false
	I0810 22:36:33.010145  411387 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0810 22:36:33.010163  411387 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0810 22:36:33.010178  411387 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0810 22:36:33.010187  411387 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0810 22:36:33.010197  411387 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0810 22:36:33.010208  411387 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0810 22:36:33.010217  411387 command_runner.go:124] > namespaces_dir = "/var/run"
	I0810 22:36:33.010234  411387 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0810 22:36:33.010243  411387 command_runner.go:124] > pinns_path = ""
	I0810 22:36:33.010255  411387 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0810 22:36:33.010270  411387 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0810 22:36:33.010284  411387 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0810 22:36:33.010293  411387 command_runner.go:124] > default_runtime = "runc"
	I0810 22:36:33.010306  411387 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0810 22:36:33.010319  411387 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0810 22:36:33.010332  411387 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0810 22:36:33.010345  411387 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0810 22:36:33.010352  411387 command_runner.go:124] > #
	I0810 22:36:33.010370  411387 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0810 22:36:33.010380  411387 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0810 22:36:33.010387  411387 command_runner.go:124] > #  runtime_type = "oci"
	I0810 22:36:33.010397  411387 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0810 22:36:33.010408  411387 command_runner.go:124] > #  privileged_without_host_devices = false
	I0810 22:36:33.010419  411387 command_runner.go:124] > #  allowed_annotations = []
	I0810 22:36:33.010428  411387 command_runner.go:124] > # Where:
	I0810 22:36:33.010437  411387 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0810 22:36:33.010450  411387 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0810 22:36:33.010463  411387 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0810 22:36:33.010477  411387 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0810 22:36:33.010486  411387 command_runner.go:124] > #   in $PATH.
	I0810 22:36:33.010500  411387 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0810 22:36:33.010511  411387 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0810 22:36:33.010524  411387 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0810 22:36:33.010532  411387 command_runner.go:124] > #   state.
	I0810 22:36:33.010542  411387 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0810 22:36:33.010553  411387 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0810 22:36:33.010565  411387 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0810 22:36:33.010579  411387 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0810 22:36:33.010589  411387 command_runner.go:124] > #   The currently recognized values are:
	I0810 22:36:33.010603  411387 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0810 22:36:33.010616  411387 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0810 22:36:33.010629  411387 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0810 22:36:33.010638  411387 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0810 22:36:33.010646  411387 command_runner.go:124] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0810 22:36:33.010659  411387 command_runner.go:124] > runtime_type = "oci"
	I0810 22:36:33.010669  411387 command_runner.go:124] > runtime_root = "/run/runc"
	I0810 22:36:33.010682  411387 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0810 22:36:33.010693  411387 command_runner.go:124] > # running containers
	I0810 22:36:33.010704  411387 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0810 22:36:33.010717  411387 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0810 22:36:33.010730  411387 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0810 22:36:33.010742  411387 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0810 22:36:33.010753  411387 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0810 22:36:33.010764  411387 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0810 22:36:33.010774  411387 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0810 22:36:33.010784  411387 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0810 22:36:33.010794  411387 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0810 22:36:33.010804  411387 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0810 22:36:33.010817  411387 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0810 22:36:33.010824  411387 command_runner.go:124] > #
	I0810 22:36:33.010834  411387 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0810 22:36:33.010850  411387 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0810 22:36:33.010865  411387 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0810 22:36:33.010879  411387 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0810 22:36:33.010892  411387 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0810 22:36:33.010901  411387 command_runner.go:124] > [crio.image]
	I0810 22:36:33.010914  411387 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0810 22:36:33.010924  411387 command_runner.go:124] > default_transport = "docker://"
	I0810 22:36:33.010935  411387 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0810 22:36:33.010948  411387 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:36:33.010957  411387 command_runner.go:124] > global_auth_file = ""
	I0810 22:36:33.010968  411387 command_runner.go:124] > # The image used to instantiate infra containers.
	I0810 22:36:33.010979  411387 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:36:33.010989  411387 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0810 22:36:33.011004  411387 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0810 22:36:33.011016  411387 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:36:33.011028  411387 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:36:33.011038  411387 command_runner.go:124] > pause_image_auth_file = ""
	I0810 22:36:33.011050  411387 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0810 22:36:33.011063  411387 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0810 22:36:33.011076  411387 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0810 22:36:33.011091  411387 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0810 22:36:33.011102  411387 command_runner.go:124] > pause_command = "/pause"
	I0810 22:36:33.011115  411387 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0810 22:36:33.011126  411387 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0810 22:36:33.011139  411387 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0810 22:36:33.011152  411387 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0810 22:36:33.011163  411387 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0810 22:36:33.011172  411387 command_runner.go:124] > signature_policy = ""
	I0810 22:36:33.011183  411387 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0810 22:36:33.011196  411387 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0810 22:36:33.011204  411387 command_runner.go:124] > # changing them here.
	I0810 22:36:33.011211  411387 command_runner.go:124] > #insecure_registries = "[]"
	I0810 22:36:33.011225  411387 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0810 22:36:33.011236  411387 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0810 22:36:33.011245  411387 command_runner.go:124] > image_volumes = "mkdir"
	I0810 22:36:33.011257  411387 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0810 22:36:33.011270  411387 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0810 22:36:33.011282  411387 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0810 22:36:33.011295  411387 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0810 22:36:33.011304  411387 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0810 22:36:33.011312  411387 command_runner.go:124] > #registries = [
	I0810 22:36:33.011321  411387 command_runner.go:124] > # ]
	I0810 22:36:33.011334  411387 command_runner.go:124] > # Temporary directory to use for storing big files
	I0810 22:36:33.011344  411387 command_runner.go:124] > big_files_temporary_dir = ""
	I0810 22:36:33.011354  411387 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0810 22:36:33.011363  411387 command_runner.go:124] > # CNI plugins.
	I0810 22:36:33.011372  411387 command_runner.go:124] > [crio.network]
	I0810 22:36:33.011384  411387 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0810 22:36:33.011395  411387 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0810 22:36:33.011405  411387 command_runner.go:124] > # cni_default_network = "kindnet"
	I0810 22:36:33.011417  411387 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0810 22:36:33.011427  411387 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0810 22:36:33.011439  411387 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0810 22:36:33.011447  411387 command_runner.go:124] > plugin_dirs = [
	I0810 22:36:33.011456  411387 command_runner.go:124] > 	"/opt/cni/bin/",
	I0810 22:36:33.011463  411387 command_runner.go:124] > ]
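The `network_dir` and `plugin_dirs` settings above tell CRI-O where to find CNI configs and plugin binaries; with `cni_default_network` unset, CRI-O takes the first valid config found in `network_dir` (configs load in lexical order, hence the numeric filename prefixes). A minimal bridge `conflist` of the kind that would live in `/etc/cni/net.d/` — the network name and subnet here are illustrative, written to a scratch directory:

```shell
# scratch directory standing in for network_dir (/etc/cni/net.d/)
netdir=$(mktemp -d)
cat > "$netdir/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "sketch-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.244.0.0/16" } ]] }
    }
  ]
}
EOF
# with cni_default_network unset, CRI-O takes the first config found here
ls "$netdir" | head -n1
```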
	I0810 22:36:33.011476  411387 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0810 22:36:33.011484  411387 command_runner.go:124] > [crio.metrics]
	I0810 22:36:33.011492  411387 command_runner.go:124] > # Globally enable or disable metrics support.
	I0810 22:36:33.011501  411387 command_runner.go:124] > enable_metrics = false
	I0810 22:36:33.011510  411387 command_runner.go:124] > # The port on which the metrics server will listen.
	I0810 22:36:33.011522  411387 command_runner.go:124] > metrics_port = 9090
	I0810 22:36:33.011551  411387 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0810 22:36:33.011562  411387 command_runner.go:124] > metrics_socket = ""
	I0810 22:36:33.011613  411387 command_runner.go:124] ! time="2021-08-10T22:36:33Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0810 22:36:33.011635  411387 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0810 22:36:33.011748  411387 cni.go:93] Creating CNI manager for ""
	I0810 22:36:33.011761  411387 cni.go:154] 1 nodes found, recommending kindnet
	I0810 22:36:33.011772  411387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:36:33.011789  411387 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210810223625-345780 NodeName:multinode-20210810223625-345780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:36:33.011959  411387 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210810223625-345780"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:36:33.012080  411387 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-20210810223625-345780 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223625-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:36:33.012146  411387 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:36:33.019348  411387 command_runner.go:124] > kubeadm
	I0810 22:36:33.019381  411387 command_runner.go:124] > kubectl
	I0810 22:36:33.019387  411387 command_runner.go:124] > kubelet
	I0810 22:36:33.019956  411387 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:36:33.020029  411387 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:36:33.027161  411387 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0810 22:36:33.039802  411387 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:36:33.052168  411387 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0810 22:36:33.063974  411387 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0810 22:36:33.066706  411387 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
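The `/etc/hosts` rewrite above is minikube's idempotent update pattern: drop any existing line for the hostname, append the fresh mapping, stage the result in a temp file, then copy it over the original. A minimal sketch against a scratch file instead of `/etc/hosts` (the `update_hosts` helper name is ours, not minikube's):

```shell
# scratch file standing in for /etc/hosts
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# drop any old line for the name, append the fresh mapping, stage, then copy
update_hosts() {
  ip=$1; name=$2; file=$3
  { grep -v $'\t'"$name"'$' "$file"; printf '%s\t%s\n' "$ip" "$name"; } > "$file.new"
  cp "$file.new" "$file"    # minikube runs this cp under sudo against /etc/hosts
}

update_hosts 10.0.0.5 control-plane.minikube.internal "$HOSTS"
cat "$HOSTS"
```

Running the helper twice with different IPs still leaves exactly one entry for the name, which is the point of the `grep -v` pass.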
	I0810 22:36:33.075393  411387 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780 for IP: 192.168.49.2
	I0810 22:36:33.075446  411387 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:36:33.075468  411387 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:36:33.075527  411387 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.key
	I0810 22:36:33.075537  411387 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.crt with IP's: []
	I0810 22:36:33.167584  411387 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.crt ...
	I0810 22:36:33.167621  411387 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.crt: {Name:mk557d2fb6ecdbccfaa215057adbcd11aa7fec17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:36:33.167844  411387 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.key ...
	I0810 22:36:33.167859  411387 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.key: {Name:mkc8472e3fe80c29c61376b535b02a33df85b42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:36:33.167949  411387 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.key.dd3b5fb2
	I0810 22:36:33.167963  411387 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0810 22:36:33.389428  411387 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.crt.dd3b5fb2 ...
	I0810 22:36:33.389482  411387 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.crt.dd3b5fb2: {Name:mkaf645ba0c01626b8876edfc26417d1e909334e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:36:33.389703  411387 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.key.dd3b5fb2 ...
	I0810 22:36:33.389719  411387 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.key.dd3b5fb2: {Name:mkf30e18bced3296d08e5ec21d7cf778cd6c4405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:36:33.389806  411387 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.crt
	I0810 22:36:33.389877  411387 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.key
	I0810 22:36:33.389932  411387 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.key
	I0810 22:36:33.389940  411387 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.crt with IP's: []
	I0810 22:36:33.539916  411387 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.crt ...
	I0810 22:36:33.539958  411387 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.crt: {Name:mk568f28ceff3028e523226cd1f10acbe23dcd44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:36:33.540182  411387 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.key ...
	I0810 22:36:33.540197  411387 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.key: {Name:mk3a2bcc04a96f1e966d88b9a32a283ebe30415e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:36:33.540279  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0810 22:36:33.540300  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0810 22:36:33.540309  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0810 22:36:33.540321  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0810 22:36:33.540331  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0810 22:36:33.540345  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0810 22:36:33.540355  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0810 22:36:33.540367  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0810 22:36:33.540418  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem (1338 bytes)
	W0810 22:36:33.540456  411387 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780_empty.pem, impossibly tiny 0 bytes
	I0810 22:36:33.540467  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0810 22:36:33.540494  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:36:33.540520  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:36:33.540539  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:36:33.540577  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:36:33.540610  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:36:33.540624  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem -> /usr/share/ca-certificates/345780.pem
	I0810 22:36:33.540637  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> /usr/share/ca-certificates/3457802.pem
	I0810 22:36:33.541601  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:36:33.560070  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0810 22:36:33.577183  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:36:33.593897  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0810 22:36:33.611109  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:36:33.627609  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:36:33.644335  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:36:33.660624  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0810 22:36:33.676554  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:36:33.693147  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem --> /usr/share/ca-certificates/345780.pem (1338 bytes)
	I0810 22:36:33.709788  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /usr/share/ca-certificates/3457802.pem (1708 bytes)
	I0810 22:36:33.726542  411387 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:36:33.738654  411387 ssh_runner.go:149] Run: openssl version
	I0810 22:36:33.743200  411387 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0810 22:36:33.743420  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:36:33.750300  411387 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:36:33.753111  411387 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:36:33.753258  411387 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:36:33.753308  411387 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:36:33.757610  411387 command_runner.go:124] > b5213941
	I0810 22:36:33.757855  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:36:33.764724  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/345780.pem && ln -fs /usr/share/ca-certificates/345780.pem /etc/ssl/certs/345780.pem"
	I0810 22:36:33.771573  411387 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/345780.pem
	I0810 22:36:33.774478  411387 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 10 22:29 /usr/share/ca-certificates/345780.pem
	I0810 22:36:33.774555  411387 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:29 /usr/share/ca-certificates/345780.pem
	I0810 22:36:33.774607  411387 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/345780.pem
	I0810 22:36:33.779131  411387 command_runner.go:124] > 51391683
	I0810 22:36:33.779194  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/345780.pem /etc/ssl/certs/51391683.0"
	I0810 22:36:33.786095  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3457802.pem && ln -fs /usr/share/ca-certificates/3457802.pem /etc/ssl/certs/3457802.pem"
	I0810 22:36:33.793706  411387 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3457802.pem
	I0810 22:36:33.796801  411387 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 10 22:29 /usr/share/ca-certificates/3457802.pem
	I0810 22:36:33.796906  411387 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:29 /usr/share/ca-certificates/3457802.pem
	I0810 22:36:33.796970  411387 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3457802.pem
	I0810 22:36:33.801593  411387 command_runner.go:124] > 3ec20f2e
	I0810 22:36:33.801777  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3457802.pem /etc/ssl/certs/3ec20f2e.0"
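The `openssl x509 -hash -noout` calls above compute the OpenSSL subject hash for each CA, and the `ln -fs ... /etc/ssl/certs/<hash>.0` symlinks follow OpenSSL's `c_rehash` naming convention so the certs become discoverable by hash lookup. A sketch of the same mechanism with a throwaway self-signed CA (the CN and paths here are made up):

```shell
tmp=$(mktemp -d)
# throwaway self-signed CA standing in for minikubeCA.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=sketchCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
# subject hash: 8 hex digits derived from the certificate's subject name
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# c_rehash naming convention: <subject-hash>.<n>
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
echo "subject hash: $hash"
```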
	I0810 22:36:33.809355  411387 kubeadm.go:390] StartCluster: {Name:multinode-20210810223625-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223625-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:36:33.809442  411387 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:36:33.809511  411387 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:36:33.833078  411387 cri.go:76] found id: ""
	I0810 22:36:33.833133  411387 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:36:33.840188  411387 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0810 22:36:33.840217  411387 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0810 22:36:33.840228  411387 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0810 22:36:33.840283  411387 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
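The `.new` suffix above is a staged write: the rendered config is scp'd to `kubeadm.yaml.new` first and only copied over `kubeadm.yaml` once the transfer is complete, so a partial upload never replaces a good config. Sketched with local files in a scratch directory:

```shell
dir=$(mktemp -d)
# stage the rendered config under a .new name, never writing the live file directly
printf 'apiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\n' > "$dir/kubeadm.yaml.new"
# promote only after the staged copy is fully written
cp "$dir/kubeadm.yaml.new" "$dir/kubeadm.yaml"
ls "$dir"
```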
	I0810 22:36:33.846656  411387 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0810 22:36:33.846705  411387 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:36:33.852985  411387 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0810 22:36:33.853014  411387 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0810 22:36:33.853025  411387 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0810 22:36:33.853034  411387 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 22:36:33.853063  411387 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
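The `Process exited with status 2` above is the expected outcome on a fresh node: with GNU coreutils, `ls` exits with status 2 when any listed operand cannot be accessed, and minikube reads that as "no stale kubeadm configs to clean up". A sketch of the same probe (assuming GNU `ls` exit-status semantics):

```shell
# probe for leftover kubeadm configs; a nonzero status (2 with GNU ls when
# files are missing) means a fresh node, so stale-config cleanup is skipped
rc=0
ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
      >/dev/null 2>&1 || rc=$?
echo "config check exit status: $rc"
```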
	I0810 22:36:33.853095  411387 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0810 22:36:33.908618  411387 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0810 22:36:33.908689  411387 command_runner.go:124] > [preflight] Running pre-flight checks
	I0810 22:36:33.936000  411387 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0810 22:36:33.936091  411387 command_runner.go:124] > KERNEL_VERSION: 4.9.0-16-amd64
	I0810 22:36:33.936150  411387 command_runner.go:124] > OS: Linux
	I0810 22:36:33.936197  411387 command_runner.go:124] > CGROUPS_CPU: enabled
	I0810 22:36:33.936247  411387 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0810 22:36:33.936291  411387 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0810 22:36:33.936341  411387 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0810 22:36:33.936384  411387 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0810 22:36:33.936429  411387 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0810 22:36:33.936473  411387 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0810 22:36:33.936527  411387 command_runner.go:124] > CGROUPS_HUGETLB: missing
	I0810 22:36:34.005688  411387 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0810 22:36:34.005813  411387 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0810 22:36:34.005902  411387 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0810 22:36:34.140156  411387 out.go:204]   - Generating certificates and keys ...
	I0810 22:36:34.136238  411387 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0810 22:36:34.140327  411387 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0810 22:36:34.140432  411387 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0810 22:36:34.309537  411387 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0810 22:36:34.518184  411387 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0810 22:36:34.910907  411387 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0810 22:36:35.098800  411387 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0810 22:36:35.215978  411387 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0810 22:36:35.216131  411387 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210810223625-345780] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0810 22:36:35.283777  411387 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0810 22:36:35.283973  411387 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210810223625-345780] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0810 22:36:35.365181  411387 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0810 22:36:35.651312  411387 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0810 22:36:35.984856  411387 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0810 22:36:35.984966  411387 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0810 22:36:36.119173  411387 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0810 22:36:36.196613  411387 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0810 22:36:36.470799  411387 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0810 22:36:36.592400  411387 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0810 22:36:36.599538  411387 command_runner.go:124] > [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
	I0810 22:36:36.599703  411387 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0810 22:36:36.600582  411387 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0810 22:36:36.600657  411387 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0810 22:36:36.662869  411387 out.go:204]   - Booting up control plane ...
	I0810 22:36:36.660262  411387 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0810 22:36:36.663020  411387 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0810 22:36:36.667217  411387 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0810 22:36:36.668115  411387 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0810 22:36:36.668782  411387 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0810 22:36:36.670688  411387 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0810 22:36:50.173420  411387 command_runner.go:124] > [apiclient] All control plane components are healthy after 13.502660 seconds
	I0810 22:36:50.173573  411387 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0810 22:36:50.183551  411387 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0810 22:36:50.701703  411387 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0810 22:36:50.701950  411387 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210810223625-345780 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0810 22:36:51.211218  411387 out.go:204]   - Configuring RBAC rules ...
	I0810 22:36:51.209423  411387 command_runner.go:124] > [bootstrap-token] Using token: 7pbn5h.tzw9oa5bdctxxun4
	I0810 22:36:51.211429  411387 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0810 22:36:51.214853  411387 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0810 22:36:51.221715  411387 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0810 22:36:51.224020  411387 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0810 22:36:51.226838  411387 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0810 22:36:51.230274  411387 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0810 22:36:51.237106  411387 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0810 22:36:51.411414  411387 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0810 22:36:51.618728  411387 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0810 22:36:51.619735  411387 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0810 22:36:51.619860  411387 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0810 22:36:51.619902  411387 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0810 22:36:51.619984  411387 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0810 22:36:51.620054  411387 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0810 22:36:51.620140  411387 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0810 22:36:51.620211  411387 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0810 22:36:51.620378  411387 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0810 22:36:51.620447  411387 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0810 22:36:51.620509  411387 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0810 22:36:51.620628  411387 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0810 22:36:51.620745  411387 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0810 22:36:51.620868  411387 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 7pbn5h.tzw9oa5bdctxxun4 \
	I0810 22:36:51.621052  411387 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:95b70b0e3b8140822120816c1284056e6e385d941feb1ffb25a07e039168adfc \
	I0810 22:36:51.621089  411387 command_runner.go:124] > 	--control-plane 
	I0810 22:36:51.621200  411387 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0810 22:36:51.621308  411387 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 7pbn5h.tzw9oa5bdctxxun4 \
	I0810 22:36:51.621435  411387 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:95b70b0e3b8140822120816c1284056e6e385d941feb1ffb25a07e039168adfc 
	I0810 22:36:51.622529  411387 command_runner.go:124] ! 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	I0810 22:36:51.622623  411387 command_runner.go:124] ! 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0810 22:36:51.622820  411387 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
	I0810 22:36:51.622930  411387 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0810 22:36:51.622971  411387 cni.go:93] Creating CNI manager for ""
	I0810 22:36:51.622982  411387 cni.go:154] 1 nodes found, recommending kindnet
	I0810 22:36:51.625141  411387 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0810 22:36:51.625209  411387 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:36:51.628657  411387 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0810 22:36:51.628678  411387 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0810 22:36:51.628689  411387 command_runner.go:124] > Device: 801h/2049d	Inode: 3807833     Links: 1
	I0810 22:36:51.628699  411387 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:36:51.628712  411387 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0810 22:36:51.628723  411387 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0810 22:36:51.628733  411387 command_runner.go:124] > Change: 2021-07-02 14:50:00.997696388 +0000
	I0810 22:36:51.628740  411387 command_runner.go:124] >  Birth: -
	I0810 22:36:51.628859  411387 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 22:36:51.628879  411387 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:36:51.641740  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:36:51.977131  411387 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0810 22:36:51.980695  411387 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0810 22:36:51.985923  411387 command_runner.go:124] > serviceaccount/kindnet created
	I0810 22:36:51.993018  411387 command_runner.go:124] > daemonset.apps/kindnet created
	I0810 22:36:51.997568  411387 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0810 22:36:51.997655  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:51.997657  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=multinode-20210810223625-345780 minikube.k8s.io/updated_at=2021_08_10T22_36_51_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:52.012847  411387 command_runner.go:124] > -16
	I0810 22:36:52.012884  411387 ops.go:34] apiserver oom_adj: -16
	I0810 22:36:52.084643  411387 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0810 22:36:52.087628  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:52.093761  411387 command_runner.go:124] > node/multinode-20210810223625-345780 labeled
	I0810 22:36:52.173278  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:52.674117  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:52.738785  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:53.174335  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:53.240282  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:53.674265  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:53.737835  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:54.173594  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:54.240826  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:54.674372  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:54.738158  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:55.173582  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:55.241101  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:55.673981  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:55.805160  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:56.173542  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:56.240174  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:56.674077  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:56.737876  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:57.174409  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:57.241603  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:57.674156  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:57.740937  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:58.173491  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:58.242363  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:58.673854  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:58.739592  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:59.174424  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:59.239179  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:36:59.673694  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:36:59.739779  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:00.174381  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:00.241369  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:00.674161  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:00.740886  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:01.174425  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:03.430111  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:03.434207  411387 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.259732942s)
	I0810 22:37:03.673518  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:06.254989  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:06.255034  411387 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.581479454s)
	I0810 22:37:06.673516  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:06.740761  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:07.174395  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:07.241490  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:07.674313  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:07.740506  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:08.174037  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:08.241246  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:08.673780  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:08.740090  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:09.174455  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:09.238626  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:09.673771  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:09.790642  411387 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:37:10.174180  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:37:10.238962  411387 command_runner.go:124] > NAME      SECRETS   AGE
	I0810 22:37:10.238990  411387 command_runner.go:124] > default   1         0s
	I0810 22:37:10.241229  411387 kubeadm.go:985] duration metric: took 18.24365085s to wait for elevateKubeSystemPrivileges.
	I0810 22:37:10.241260  411387 kubeadm.go:392] StartCluster complete in 36.431914074s
	I0810 22:37:10.241284  411387 settings.go:142] acquiring lock: {Name:mka213f92e424859b3fea9ed3e06c1529c3d79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:37:10.241397  411387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:37:10.243017  411387 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mk4b0a8134f819d1f0c4fc03757f6964ae0e24de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:37:10.243750  411387 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:37:10.244376  411387 kapi.go:59] client config for multinode-20210810223625-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:37:10.244941  411387 cert_rotation.go:137] Starting client certificate rotation controller
	I0810 22:37:10.246265  411387 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:37:10.246286  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:10.246294  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:10.246300  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:10.254212  411387 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0810 22:37:10.254235  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:10.254240  411387 round_trippers.go:463]     Content-Length: 291
	I0810 22:37:10.254244  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:10 GMT
	I0810 22:37:10.254247  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:10.254250  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:10.254253  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:10.254256  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:10.254283  411387 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9ed71d10-c7c4-45dd-82ba-367c39a64ef1","resourceVersion":"254","creationTimestamp":"2021-08-10T22:36:51Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:37:10.254937  411387 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9ed71d10-c7c4-45dd-82ba-367c39a64ef1","resourceVersion":"254","creationTimestamp":"2021-08-10T22:36:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:37:10.254999  411387 round_trippers.go:432] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:37:10.255010  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:10.255015  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:10.255019  411387 round_trippers.go:442]     Content-Type: application/json
	I0810 22:37:10.255025  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:10.258408  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:10.258436  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:10.258444  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:10.258449  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:10.258453  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:10.258459  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:10.258464  411387 round_trippers.go:463]     Content-Length: 291
	I0810 22:37:10.258469  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:10 GMT
	I0810 22:37:10.258497  411387 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9ed71d10-c7c4-45dd-82ba-367c39a64ef1","resourceVersion":"409","creationTimestamp":"2021-08-10T22:36:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:37:10.759168  411387 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:37:10.759208  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:10.759220  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:10.759227  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:10.761571  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:10.761594  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:10.761600  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:10.761604  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:10.761607  411387 round_trippers.go:463]     Content-Length: 291
	I0810 22:37:10.761610  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:10 GMT
	I0810 22:37:10.761613  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:10.761616  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:10.761648  411387 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9ed71d10-c7c4-45dd-82ba-367c39a64ef1","resourceVersion":"452","creationTimestamp":"2021-08-10T22:36:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0810 22:37:10.761817  411387 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210810223625-345780" rescaled to 1
	I0810 22:37:10.761874  411387 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:37:10.764159  411387 out.go:177] * Verifying Kubernetes components...
	I0810 22:37:10.764231  411387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:37:10.761915  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:37:10.761935  411387 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0810 22:37:10.764328  411387 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210810223625-345780"
	I0810 22:37:10.764333  411387 addons.go:59] Setting default-storageclass=true in profile "multinode-20210810223625-345780"
	I0810 22:37:10.764353  411387 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210810223625-345780"
	I0810 22:37:10.764356  411387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210810223625-345780"
	W0810 22:37:10.764362  411387 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:37:10.764395  411387 host.go:66] Checking if "multinode-20210810223625-345780" exists ...
	I0810 22:37:10.764720  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Status}}
	I0810 22:37:10.764949  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Status}}
	I0810 22:37:10.809221  411387 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:37:10.809555  411387 kapi.go:59] client config for multinode-20210810223625-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:37:10.811117  411387 round_trippers.go:432] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0810 22:37:10.811134  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:10.811139  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:10.811143  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:10.816297  411387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:37:10.813653  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:10.816407  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:10.816418  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:10.816423  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:10.816426  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:10.816429  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:10.816433  411387 round_trippers.go:463]     Content-Length: 109
	I0810 22:37:10.816436  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:10 GMT
	I0810 22:37:10.816437  411387 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:37:10.816449  411387 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:37:10.816460  411387 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"452"},"items":[]}
	I0810 22:37:10.816521  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:37:10.817262  411387 addons.go:135] Setting addon default-storageclass=true in "multinode-20210810223625-345780"
	W0810 22:37:10.817283  411387 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:37:10.817311  411387 host.go:66] Checking if "multinode-20210810223625-345780" exists ...
	I0810 22:37:10.817707  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Status}}
	I0810 22:37:10.845269  411387 command_runner.go:124] > apiVersion: v1
	I0810 22:37:10.845298  411387 command_runner.go:124] > data:
	I0810 22:37:10.845306  411387 command_runner.go:124] >   Corefile: |
	I0810 22:37:10.845313  411387 command_runner.go:124] >     .:53 {
	I0810 22:37:10.845320  411387 command_runner.go:124] >         errors
	I0810 22:37:10.845327  411387 command_runner.go:124] >         health {
	I0810 22:37:10.845335  411387 command_runner.go:124] >            lameduck 5s
	I0810 22:37:10.845348  411387 command_runner.go:124] >         }
	I0810 22:37:10.845355  411387 command_runner.go:124] >         ready
	I0810 22:37:10.845369  411387 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0810 22:37:10.845381  411387 command_runner.go:124] >            pods insecure
	I0810 22:37:10.845394  411387 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0810 22:37:10.845407  411387 command_runner.go:124] >            ttl 30
	I0810 22:37:10.845418  411387 command_runner.go:124] >         }
	I0810 22:37:10.845425  411387 command_runner.go:124] >         prometheus :9153
	I0810 22:37:10.845438  411387 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0810 22:37:10.845450  411387 command_runner.go:124] >            max_concurrent 1000
	I0810 22:37:10.845461  411387 command_runner.go:124] >         }
	I0810 22:37:10.845469  411387 command_runner.go:124] >         cache 30
	I0810 22:37:10.845481  411387 command_runner.go:124] >         loop
	I0810 22:37:10.845492  411387 command_runner.go:124] >         reload
	I0810 22:37:10.845503  411387 command_runner.go:124] >         loadbalance
	I0810 22:37:10.845514  411387 command_runner.go:124] >     }
	I0810 22:37:10.845521  411387 command_runner.go:124] > kind: ConfigMap
	I0810 22:37:10.845532  411387 command_runner.go:124] > metadata:
	I0810 22:37:10.845549  411387 command_runner.go:124] >   creationTimestamp: "2021-08-10T22:36:51Z"
	I0810 22:37:10.845560  411387 command_runner.go:124] >   name: coredns
	I0810 22:37:10.845563  411387 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:37:10.845572  411387 command_runner.go:124] >   namespace: kube-system
	I0810 22:37:10.845583  411387 command_runner.go:124] >   resourceVersion: "248"
	I0810 22:37:10.845594  411387 command_runner.go:124] >   uid: 6d9276ca-3491-418b-a2f9-92e5dd2d3daa
	I0810 22:37:10.845777  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0810 22:37:10.845788  411387 kapi.go:59] client config for multinode-20210810223625-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:37:10.847288  411387 node_ready.go:35] waiting up to 6m0s for node "multinode-20210810223625-345780" to be "Ready" ...
	I0810 22:37:10.847368  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:10.847377  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:10.847382  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:10.847386  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:10.849643  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:10.849681  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:10.849693  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:10.849703  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:10.849709  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:10.849713  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:10.849718  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:10 GMT
	I0810 22:37:10.849867  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:10.851550  411387 node_ready.go:49] node "multinode-20210810223625-345780" has status "Ready":"True"
	I0810 22:37:10.851573  411387 node_ready.go:38] duration metric: took 4.256881ms waiting for node "multinode-20210810223625-345780" to be "Ready" ...
	I0810 22:37:10.851585  411387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:37:10.851680  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0810 22:37:10.851697  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:10.851705  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:10.851710  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:10.855481  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:10.855496  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:10.855503  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:10.855507  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:10.855512  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:10.855517  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:10.855521  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:10 GMT
	I0810 22:37:10.856275  411387 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 56061 chars]
	I0810 22:37:10.865021  411387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace to be "Ready" ...
	I0810 22:37:10.865167  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:10.865182  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:10.865189  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:10.865198  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:10.865646  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:37:10.867712  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:10.867729  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:10.867735  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:10.867739  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:10.867751  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:10.867759  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:10.867763  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:10 GMT
	I0810 22:37:10.867881  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:10.870762  411387 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:37:10.870784  411387 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:37:10.870848  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:37:10.872009  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:10.872030  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:10.872037  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:10.872043  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:10.875317  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:10.875335  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:10.875341  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:10.875345  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:10.875353  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:10.875357  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:10.875362  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:10 GMT
	I0810 22:37:10.875496  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:10.919845  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:37:10.978452  411387 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:37:11.072319  411387 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:37:11.262229  411387 command_runner.go:124] > configmap/coredns replaced
	I0810 22:37:11.266491  411387 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0810 22:37:11.367436  411387 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0810 22:37:11.372672  411387 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0810 22:37:11.377016  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:11.377044  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:11.377054  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:11.377062  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:11.378405  411387 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0810 22:37:11.379867  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:11.379888  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:11.379894  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:11.379899  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:11.379904  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:11.379908  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:11 GMT
	I0810 22:37:11.379913  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:11.380024  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:11.380461  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:11.380478  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:11.380486  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:11.380491  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:11.382367  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:11.382389  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:11.382396  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:11 GMT
	I0810 22:37:11.382401  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:11.382405  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:11.382410  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:11.382415  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:11.382563  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:11.384351  411387 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0810 22:37:11.391354  411387 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0810 22:37:11.403431  411387 command_runner.go:124] > pod/storage-provisioner created
	I0810 22:37:11.474007  411387 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0810 22:37:11.482637  411387 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0810 22:37:11.482683  411387 addons.go:344] enableAddons completed in 720.751324ms
	I0810 22:37:11.876169  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:11.876196  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:11.876204  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:11.876210  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:11.879125  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:11.879149  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:11.879156  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:11.879161  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:11.879165  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:11 GMT
	I0810 22:37:11.879169  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:11.879174  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:11.879327  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:11.880296  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:11.880326  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:11.880334  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:11.880350  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:11.882618  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:11.882665  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:11.882691  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:11.882710  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:11.882743  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:11.882756  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:11.882762  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:11 GMT
	I0810 22:37:11.882894  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:12.375993  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:12.376024  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:12.376032  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:12.376037  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:12.379989  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:12.380021  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:12.380028  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:12.380034  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:12.380040  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:12.380046  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:12.380051  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:12 GMT
	I0810 22:37:12.380186  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:12.380631  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:12.380653  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:12.380659  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:12.380664  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:12.383107  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:12.383131  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:12.383138  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:12.383143  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:12 GMT
	I0810 22:37:12.383148  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:12.383152  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:12.383156  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:12.383370  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:12.876843  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:12.876873  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:12.876881  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:12.876888  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:12.879283  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:12.879307  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:12.879315  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:12.879320  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:12.879324  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:12.879329  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:12.879333  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:12 GMT
	I0810 22:37:12.879469  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:12.879867  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:12.879882  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:12.879888  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:12.879891  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:12.883760  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:12.883786  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:12.883852  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:12 GMT
	I0810 22:37:12.883864  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:12.883869  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:12.883873  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:12.883878  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:12.883998  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:12.884255  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:13.376467  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:13.376496  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:13.376503  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:13.376509  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:13.378954  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:13.378974  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:13.378979  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:13.378983  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:13 GMT
	I0810 22:37:13.378986  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:13.378989  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:13.378992  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:13.379075  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:13.379420  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:13.379433  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:13.379439  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:13.379443  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:13.381195  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:13.381212  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:13.381217  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:13 GMT
	I0810 22:37:13.381221  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:13.381227  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:13.381232  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:13.381236  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:13.381335  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:13.875981  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:13.876009  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:13.876016  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:13.876020  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:13.878495  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:13.878537  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:13.878542  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:13 GMT
	I0810 22:37:13.878546  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:13.878549  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:13.878552  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:13.878556  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:13.878698  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:13.879093  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:13.879108  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:13.879113  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:13.879119  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:13.881031  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:13.881053  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:13.881058  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:13.881062  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:13.881065  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:13.881068  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:13.881074  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:13 GMT
	I0810 22:37:13.881206  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:14.376783  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:14.376814  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:14.376820  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:14.376824  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:14.379422  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:14.379456  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:14.379463  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:14.379468  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:14.379472  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:14.379475  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:14.379478  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:14 GMT
	I0810 22:37:14.379556  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:14.379877  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:14.379892  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:14.379897  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:14.379901  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:14.381606  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:14.381624  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:14.381635  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:14.381640  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:14.381645  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:14.381652  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:14.381657  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:14 GMT
	I0810 22:37:14.381737  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:14.876324  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:14.876354  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:14.876361  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:14.876365  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:14.878858  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:14.878883  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:14.878888  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:14.878892  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:14.878897  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:14.878902  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:14.878906  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:14 GMT
	I0810 22:37:14.878994  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:14.879332  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:14.879345  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:14.879349  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:14.879353  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:14.881064  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:14.881086  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:14.881092  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:14.881097  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:14.881105  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:14.881109  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:14.881113  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:14 GMT
	I0810 22:37:14.883380  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:15.376236  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:15.376266  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:15.376272  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:15.376276  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:15.378709  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:15.378728  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:15.378733  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:15.378736  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:15.378739  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:15 GMT
	I0810 22:37:15.378742  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:15.378745  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:15.378836  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:15.379211  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:15.379226  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:15.379231  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:15.379235  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:15.380999  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:15.381021  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:15.381027  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:15.381032  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:15 GMT
	I0810 22:37:15.381036  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:15.381039  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:15.381043  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:15.381160  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:15.381461  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:15.876844  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:15.876871  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:15.876878  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:15.876882  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:15.879294  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:15.879315  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:15.879322  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:15 GMT
	I0810 22:37:15.879327  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:15.879332  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:15.879336  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:15.879347  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:15.879480  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:15.879833  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:15.879849  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:15.879856  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:15.879861  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:15.881508  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:15.881525  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:15.881536  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:15.881542  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:15.881546  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:15.881551  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:15.881555  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:15 GMT
	I0810 22:37:15.881665  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:16.376299  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:16.376327  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:16.376335  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:16.376341  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:16.378764  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:16.378785  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:16.378791  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:16.378795  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:16.378798  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:16.378802  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:16.378805  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:16 GMT
	I0810 22:37:16.378887  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:16.379228  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:16.379240  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:16.379245  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:16.379249  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:16.381061  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:16.381083  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:16.381090  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:16.381095  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:16.381100  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:16.381105  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:16.381110  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:16 GMT
	I0810 22:37:16.381195  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:16.876859  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:16.876887  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:16.876894  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:16.876899  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:16.879237  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:16.879258  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:16.879263  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:16.879267  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:16.879270  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:16.879273  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:16.879277  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:16 GMT
	I0810 22:37:16.879361  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:16.879694  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:16.879708  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:16.879713  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:16.879719  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:16.881434  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:16.881455  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:16.881462  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:16.881468  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:16 GMT
	I0810 22:37:16.881472  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:16.881477  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:16.881480  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:16.881587  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:17.376148  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:17.376177  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:17.376184  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:17.376189  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:17.378660  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:17.378682  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:17.378689  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:17.378695  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:17.378699  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:17.378704  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:17 GMT
	I0810 22:37:17.378708  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:17.378801  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:17.379173  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:17.379189  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:17.379194  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:17.379199  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:17.380977  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:17.380996  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:17.381003  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:17.381008  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:17 GMT
	I0810 22:37:17.381011  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:17.381015  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:17.381018  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:17.381115  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:17.876800  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:17.876829  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:17.876836  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:17.876841  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:17.879282  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:17.879307  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:17.879314  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:17.879319  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:17.879324  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:17.879329  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:17 GMT
	I0810 22:37:17.879333  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:17.879437  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:17.879785  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:17.879798  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:17.879803  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:17.879809  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:17.881543  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:17.881560  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:17.881565  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:17.881568  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:17.881571  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:17.881574  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:17.881578  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:17 GMT
	I0810 22:37:17.881669  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:17.881921  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
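The `pod_ready` verdict above comes from fetching the Pod object on each poll (the GETs in this log) and inspecting its `status.conditions` for a `Ready` condition. As an illustration only — this is not minikube's actual Go implementation — the condition check can be sketched like this:

```python
def pod_is_ready(pod: dict) -> bool:
    """Return True if the Pod's status carries a Ready condition with status "True".

    Illustrative re-implementation of the kind of check pod_ready.go logs above;
    the real minikube code is Go using client-go, not this Python sketch.
    """
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    # No Ready condition reported yet (e.g. pod still initializing): not ready.
    return False


# A pod still starting, like coredns-558bd4d5db-brf4l in this log:
pending = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
print(pod_is_ready(pending))  # False
```

The log's 500 ms re-poll cadence (22:37:16.876, 22:37:17.376, …) is the retry loop wrapped around a check of this shape, giving up only after the test's overall timeout.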
	I0810 22:37:18.376262  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:18.376288  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:18.376294  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:18.376298  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:18.380609  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:18.380632  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:18.380658  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:18 GMT
	I0810 22:37:18.380666  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:18.380677  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:18.380681  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:18.380686  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:18.381216  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:18.381754  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:18.381772  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:18.381778  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:18.381794  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:18.396915  411387 round_trippers.go:457] Response Status: 200 OK in 15 milliseconds
	I0810 22:37:18.396958  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:18.396965  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:18.396970  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:18.396974  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:18.396979  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:18.396983  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:18 GMT
	I0810 22:37:18.397073  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:18.876642  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:18.876670  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:18.876676  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:18.876680  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:18.879209  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:18.879232  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:18.879242  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:18.879247  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:18.879251  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:18 GMT
	I0810 22:37:18.879255  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:18.879259  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:18.879372  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:18.879708  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:18.879720  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:18.879725  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:18.879729  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:18.881573  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:18.881595  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:18.881601  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:18.881606  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:18.881611  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:18.881614  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:18.881617  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:18 GMT
	I0810 22:37:18.881715  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:19.376259  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:19.376287  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:19.376293  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:19.376297  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:19.378764  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:19.378785  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:19.378792  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:19.378796  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:19.378801  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:19.378806  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:19.378811  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:19 GMT
	I0810 22:37:19.378921  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:19.379267  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:19.379282  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:19.379286  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:19.379290  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:19.380999  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:19.381018  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:19.381024  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:19.381028  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:19.381031  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:19.381034  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:19.381037  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:19 GMT
	I0810 22:37:19.381149  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:19.876753  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:19.876781  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:19.876787  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:19.876791  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:19.879155  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:19.879177  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:19.879183  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:19.879186  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:19.879189  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:19.879195  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:19.879198  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:19 GMT
	I0810 22:37:19.879297  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:19.879711  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:19.879732  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:19.879739  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:19.879745  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:19.881583  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:19.881601  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:19.881608  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:19.881613  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:19.881617  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:19.881622  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:19.881626  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:19 GMT
	I0810 22:37:19.881735  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:19.882091  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:20.376600  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:20.376622  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:20.376629  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:20.376633  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:20.379036  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:20.379059  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:20.379066  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:20.379071  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:20.379080  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:20.379085  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:20.379092  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:20 GMT
	I0810 22:37:20.379211  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:20.379590  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:20.379605  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:20.379610  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:20.379613  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:20.381356  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:20.381377  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:20.381384  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:20.381387  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:20.381391  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:20.381396  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:20.381400  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:20 GMT
	I0810 22:37:20.381542  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:20.876160  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:20.876186  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:20.876194  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:20.876200  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:20.878579  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:20.878601  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:20.878606  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:20.878610  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:20 GMT
	I0810 22:37:20.878613  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:20.878616  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:20.878619  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:20.878789  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:20.879137  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:20.879149  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:20.879154  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:20.879160  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:20.880993  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:20.881011  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:20.881018  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:20.881023  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:20.881028  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:20 GMT
	I0810 22:37:20.881032  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:20.881036  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:20.881188  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:21.376846  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:21.376876  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:21.376885  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:21.376891  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:21.379527  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:21.379547  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:21.379553  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:21 GMT
	I0810 22:37:21.379558  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:21.379562  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:21.379568  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:21.379572  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:21.379680  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:21.380049  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:21.380072  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:21.380078  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:21.380082  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:21.381956  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:21.381976  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:21.381982  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:21.381986  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:21.381991  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:21.381995  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:21.382000  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:21 GMT
	I0810 22:37:21.382099  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:21.876749  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:21.876778  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:21.876784  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:21.876789  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:21.879555  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:21.879580  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:21.879587  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:21 GMT
	I0810 22:37:21.879591  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:21.879595  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:21.879598  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:21.879602  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:21.879762  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:21.880127  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:21.880145  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:21.880151  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:21.880155  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:21.882165  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:21.882197  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:21.882203  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:21.882208  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:21.882212  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:21.882216  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:21.882220  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:21 GMT
	I0810 22:37:21.882342  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:21.882669  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:22.377025  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:22.377057  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:22.377064  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:22.377068  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:22.379447  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:22.379468  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:22.379475  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:22 GMT
	I0810 22:37:22.379480  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:22.379485  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:22.379490  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:22.379498  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:22.379606  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:22.380138  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:22.380161  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:22.380168  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:22.380175  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:22.381989  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:22.382011  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:22.382018  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:22.382023  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:22.382027  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:22.382032  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:22 GMT
	I0810 22:37:22.382036  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:22.382147  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:22.876769  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:22.876795  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:22.876803  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:22.876809  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:22.879188  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:22.879207  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:22.879213  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:22.879216  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:22.879220  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:22.879223  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:22 GMT
	I0810 22:37:22.879226  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:22.879348  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:22.879720  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:22.879735  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:22.879740  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:22.879745  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:22.881503  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:22.881520  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:22.881526  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:22.881531  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:22.881536  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:22 GMT
	I0810 22:37:22.881540  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:22.881545  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:22.881663  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:23.376210  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:23.376236  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:23.376244  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:23.376249  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:23.378823  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:23.378852  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:23.378859  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:23.378864  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:23 GMT
	I0810 22:37:23.378869  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:23.378874  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:23.378882  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:23.378993  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:23.379351  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:23.379366  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:23.379373  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:23.379379  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:23.381102  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:23.381119  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:23.381125  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:23.381130  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:23 GMT
	I0810 22:37:23.381135  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:23.381140  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:23.381145  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:23.381242  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:23.876915  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:23.876976  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:23.876985  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:23.876991  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:23.879330  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:23.879351  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:23.879358  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:23 GMT
	I0810 22:37:23.879363  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:23.879367  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:23.879370  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:23.879373  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:23.879534  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:23.879970  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:23.879988  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:23.879994  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:23.879998  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:23.883714  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:23.883736  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:23.883743  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:23.883748  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:23.883751  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:23.883754  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:23.883758  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:23 GMT
	I0810 22:37:23.883870  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:23.884206  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:24.376508  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:24.376536  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:24.376542  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:24.376546  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:24.379095  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:24.379115  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:24.379120  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:24.379127  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:24.379130  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:24.379133  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:24.379137  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:24 GMT
	I0810 22:37:24.379305  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:24.379862  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:24.379885  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:24.379893  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:24.379899  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:24.381710  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:24.381729  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:24.381742  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:24 GMT
	I0810 22:37:24.381747  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:24.381753  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:24.381759  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:24.381762  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:24.381868  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:24.876638  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:24.876667  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:24.876673  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:24.876678  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:24.881084  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:24.881104  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:24.881111  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:24.881116  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:24.881120  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:24.881124  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:24.881128  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:24 GMT
	I0810 22:37:24.881231  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:24.881583  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:24.881600  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:24.881607  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:24.881613  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:24.883274  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:24.883310  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:24.883317  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:24.883321  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:24.883326  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:24.883331  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:24.883335  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:24 GMT
	I0810 22:37:24.883443  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:25.376262  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:25.376287  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:25.376293  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:25.376297  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:25.378870  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:25.378891  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:25.378897  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:25.378902  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:25.378907  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:25.378912  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:25.378916  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:25 GMT
	I0810 22:37:25.379028  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:25.379389  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:25.379403  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:25.379408  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:25.379412  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:25.381229  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:25.381248  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:25.381254  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:25 GMT
	I0810 22:37:25.381257  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:25.381260  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:25.381264  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:25.381271  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:25.381375  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:25.876374  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:25.876403  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:25.876409  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:25.876413  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:25.878774  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:25.878802  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:25.878810  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:25.878814  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:25.878822  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:25 GMT
	I0810 22:37:25.878827  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:25.878831  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:25.878930  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:25.879265  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:25.879278  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:25.879283  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:25.879287  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:25.880975  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:25.880995  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:25.881002  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:25.881008  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:25.881013  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:25.881018  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:25.881023  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:25 GMT
	I0810 22:37:25.881181  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:26.376841  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:26.376871  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:26.376877  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:26.376881  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:26.379518  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:26.379544  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:26.379550  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:26.379556  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:26.379565  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:26.379578  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:26.379584  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:26 GMT
	I0810 22:37:26.379683  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:26.380055  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:26.380069  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:26.380074  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:26.380078  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:26.381935  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:26.381953  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:26.381960  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:26.381965  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:26.381969  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:26.381974  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:26.381980  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:26 GMT
	I0810 22:37:26.382063  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:26.382298  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:26.876683  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:26.876708  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:26.876713  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:26.876717  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:26.878980  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:26.879001  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:26.879006  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:26.879010  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:26.879013  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:26.879016  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:26.879018  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:26 GMT
	I0810 22:37:26.879130  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:26.879468  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:26.879482  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:26.879487  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:26.879491  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:26.881186  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:26.881207  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:26.881213  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:26.881216  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:26.881220  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:26 GMT
	I0810 22:37:26.881223  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:26.881226  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:26.881330  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:27.376995  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:27.377024  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:27.377031  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:27.377035  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:27.379513  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:27.379536  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:27.379542  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:27.379548  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:27.379552  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:27.379556  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:27 GMT
	I0810 22:37:27.379559  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:27.379692  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:27.380016  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:27.380032  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:27.380037  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:27.380041  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:27.381719  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:27.381741  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:27.381749  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:27.381754  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:27.381759  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:27.381764  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:27.381769  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:27 GMT
	I0810 22:37:27.381950  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:27.876438  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:27.876467  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:27.876474  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:27.876478  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:27.881196  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:27.881220  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:27.881226  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:27.881229  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:27.881232  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:27.881235  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:27.881238  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:27 GMT
	I0810 22:37:27.881347  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:27.881699  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:27.881717  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:27.881723  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:27.881730  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:27.883424  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:27.883442  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:27.883447  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:27.883450  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:27.883454  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:27.883457  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:27 GMT
	I0810 22:37:27.883460  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:27.883616  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:28.376185  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:28.376221  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:28.376230  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:28.376235  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:28.378914  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:28.378940  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:28.378946  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:28.378949  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:28.378953  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:28.378956  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:28.378959  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:28 GMT
	I0810 22:37:28.379095  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:28.379627  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:28.379649  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:28.379657  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:28.379663  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:28.381493  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:28.381516  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:28.381524  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:28 GMT
	I0810 22:37:28.381529  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:28.381534  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:28.381540  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:28.381544  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:28.381680  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:28.876280  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:28.876309  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:28.876314  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:28.876319  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:28.881007  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:28.881041  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:28.881050  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:28.881054  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:28.881058  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:28.881061  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:28.881064  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:28 GMT
	I0810 22:37:28.881175  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:28.881569  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:28.881586  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:28.881597  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:28.881602  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:28.883676  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:28.883699  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:28.883707  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:28.883711  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:28.883715  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:28.883718  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:28 GMT
	I0810 22:37:28.883722  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:28.883848  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:28.884153  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:29.376423  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:29.376450  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:29.376456  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:29.376460  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:29.379154  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:29.379172  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:29.379177  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:29.379180  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:29.379183  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:29.379187  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:29.379190  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:29 GMT
	I0810 22:37:29.379271  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:29.379641  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:29.379654  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:29.379658  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:29.379662  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:29.381567  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:29.381584  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:29.381589  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:29.381593  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:29.381597  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:29.381603  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:29 GMT
	I0810 22:37:29.381608  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:29.381767  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:29.876292  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:29.876322  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:29.876328  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:29.876345  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:29.878936  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:29.878965  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:29.878973  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:29.878980  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:29.878984  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:29.878988  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:29.878993  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:29 GMT
	I0810 22:37:29.879203  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:29.879557  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:29.879570  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:29.879575  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:29.879578  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:29.883474  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:29.883494  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:29.883500  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:29.883503  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:29.883506  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:29.883509  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:29.883515  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:29 GMT
	I0810 22:37:29.883657  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:30.376670  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:30.376707  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:30.376715  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:30.376720  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:30.379239  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:30.379261  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:30.379266  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:30.379270  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:30.379273  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:30 GMT
	I0810 22:37:30.379276  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:30.379279  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:30.379407  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:30.379766  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:30.379779  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:30.379784  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:30.379788  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:30.381787  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:30.381812  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:30.381818  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:30.381821  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:30.381825  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:30.381828  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:30.381832  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:30 GMT
	I0810 22:37:30.381950  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:30.876530  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:30.876552  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:30.876559  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:30.876563  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:30.879047  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:30.879068  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:30.879094  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:30.879099  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:30 GMT
	I0810 22:37:30.879103  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:30.879108  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:30.879113  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:30.879266  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:30.879630  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:30.879651  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:30.879657  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:30.879663  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:30.881399  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:30.881416  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:30.881420  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:30.881423  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:30.881426  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:30.881430  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:30 GMT
	I0810 22:37:30.881435  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:30.881612  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:31.376148  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:31.376176  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:31.376182  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:31.376187  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:31.378809  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:31.378830  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:31.378838  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:31.378845  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:31 GMT
	I0810 22:37:31.378851  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:31.378856  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:31.378860  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:31.378983  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:31.379428  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:31.379443  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:31.379451  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:31.379458  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:31.381269  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:31.381288  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:31.381294  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:31.381298  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:31 GMT
	I0810 22:37:31.381301  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:31.381305  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:31.381309  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:31.381471  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:31.381764  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:31.875970  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:31.875997  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:31.876003  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:31.876007  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:31.878471  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:31.878495  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:31.878502  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:31.878507  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:31.878512  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:31 GMT
	I0810 22:37:31.878516  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:31.878521  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:31.878685  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:31.879029  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:31.879042  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:31.879047  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:31.879051  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:31.880995  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:31.881016  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:31.881023  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:31.881028  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:31.881033  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:31 GMT
	I0810 22:37:31.881038  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:31.881042  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:31.881140  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:32.376864  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:32.376894  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:32.376905  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:32.376909  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:32.379314  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:32.379349  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:32.379357  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:32.379363  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:32.379368  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:32.379372  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:32.379377  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:32 GMT
	I0810 22:37:32.379496  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:32.379849  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:32.379862  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:32.379866  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:32.379875  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:32.381582  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:32.381600  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:32.381605  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:32.381608  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:32.381611  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:32.381614  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:32.381619  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:32 GMT
	I0810 22:37:32.381708  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:32.876283  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:32.876309  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:32.876315  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:32.876319  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:32.878909  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:32.878935  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:32.878941  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:32.878944  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:32.878948  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:32 GMT
	I0810 22:37:32.878951  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:32.878953  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:32.879075  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:32.879456  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:32.879475  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:32.879480  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:32.879484  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:32.881274  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:32.881328  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:32.881345  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:32.881361  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:32.881369  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:32.881378  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:32.881383  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:32 GMT
	I0810 22:37:32.881540  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:33.376103  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:33.376139  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:33.376148  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:33.376154  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:33.378738  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:33.378772  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:33.378777  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:33.378781  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:33.378784  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:33.378788  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:33 GMT
	I0810 22:37:33.378791  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:33.378909  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:33.379306  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:33.379319  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:33.379325  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:33.379329  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:33.381093  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:33.381115  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:33.381121  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:33.381126  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:33.381130  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:33 GMT
	I0810 22:37:33.381134  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:33.381139  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:33.381354  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:33.876187  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:33.876217  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:33.876225  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:33.876231  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:33.878729  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:33.878746  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:33.878751  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:33.878754  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:33.878762  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:33.878767  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:33.878773  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:33 GMT
	I0810 22:37:33.878895  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:33.879252  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:33.879266  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:33.879271  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:33.879276  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:33.880838  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:33.880855  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:33.880865  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:33.880868  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:33.880872  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:33.880875  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:33 GMT
	I0810 22:37:33.880878  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:33.881063  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:33.881310  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:34.376753  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:34.376778  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:34.376784  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:34.376788  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:34.379291  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:34.379317  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:34.379323  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:34 GMT
	I0810 22:37:34.379326  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:34.379330  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:34.379334  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:34.379337  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:34.379462  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:34.379831  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:34.379846  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:34.379851  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:34.379854  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:34.381647  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:34.381671  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:34.381678  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:34.381689  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:34.381694  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:34.381699  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:34.381702  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:34 GMT
	I0810 22:37:34.381899  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:34.876367  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:34.876393  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:34.876399  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:34.876403  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:34.879199  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:34.879225  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:34.879232  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:34 GMT
	I0810 22:37:34.879237  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:34.879242  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:34.879247  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:34.879251  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:34.879410  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:34.879862  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:34.879883  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:34.879898  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:34.879903  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:34.884110  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:34.884135  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:34.884156  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:34.884162  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:34.884168  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:34 GMT
	I0810 22:37:34.884172  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:34.884176  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:34.884385  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:35.376166  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:35.376193  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:35.376198  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:35.376203  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:35.378913  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:35.378936  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:35.378941  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:35.378945  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:35.378948  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:35.378952  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:35.378961  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:35 GMT
	I0810 22:37:35.379104  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:35.379457  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:35.379471  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:35.379476  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:35.379480  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:35.381154  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:35.381171  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:35.381176  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:35.381179  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:35 GMT
	I0810 22:37:35.381182  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:35.381185  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:35.381188  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:35.381283  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:35.876970  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:35.876998  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:35.877004  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:35.877008  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:35.879396  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:35.879419  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:35.879425  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:35 GMT
	I0810 22:37:35.879430  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:35.879435  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:35.879439  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:35.879444  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:35.879609  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:35.880039  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:35.880053  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:35.880058  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:35.880062  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:35.881794  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:35.881814  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:35.881821  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:35.881826  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:35.881831  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:35.881835  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:35.881840  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:35 GMT
	I0810 22:37:35.881929  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:35.882193  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:36.376558  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:36.376588  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:36.376596  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:36.376602  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:36.379218  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:36.379239  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:36.379245  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:36.379249  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:36.379252  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:36.379255  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:36.379258  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:36 GMT
	I0810 22:37:36.379351  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:36.379732  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:36.379751  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:36.379755  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:36.379759  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:36.381609  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:36.381632  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:36.381639  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:36.381644  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:36.381649  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:36.381653  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:36.381658  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:36 GMT
	I0810 22:37:36.381821  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:36.876387  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:36.876413  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:36.876419  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:36.876423  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:36.878830  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:36.878849  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:36.878854  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:36.878857  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:36.878861  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:36.878864  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:36.878867  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:36 GMT
	I0810 22:37:36.878980  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:36.879326  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:36.879341  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:36.879347  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:36.879351  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:36.881108  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:36.881136  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:36.881143  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:36.881148  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:36.881152  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:36 GMT
	I0810 22:37:36.881157  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:36.881167  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:36.881316  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:37.377008  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:37.377033  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:37.377039  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:37.377043  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:37.379604  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:37.379627  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:37.379633  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:37.379637  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:37.379640  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:37.379644  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:37.379647  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:37 GMT
	I0810 22:37:37.379758  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:37.380148  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:37.380165  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:37.380170  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:37.380185  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:37.382045  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:37.382061  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:37.382066  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:37.382239  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:37.382628  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:37.382669  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:37.382692  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:37 GMT
	I0810 22:37:37.382943  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:37.876004  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:37.876033  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:37.876039  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:37.876044  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:37.878625  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:37.878650  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:37.878656  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:37.878659  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:37.878664  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:37.878669  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:37.878673  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:37 GMT
	I0810 22:37:37.878811  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:37.879252  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:37.879269  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:37.879274  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:37.879278  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:37.881181  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:37.881201  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:37.881208  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:37.881213  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:37.881218  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:37.881223  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:37.881228  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:37 GMT
	I0810 22:37:37.881344  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:38.376974  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:38.377004  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:38.377011  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:38.377034  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:38.379704  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:38.379729  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:38.379736  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:38.379740  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:38.379744  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:38.379749  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:38.379753  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:38 GMT
	I0810 22:37:38.379877  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:38.380272  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:38.380298  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:38.380303  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:38.380308  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:38.382388  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:38.382412  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:38.382420  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:38 GMT
	I0810 22:37:38.382425  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:38.382430  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:38.382435  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:38.382443  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:38.382572  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:38.382834  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:38.875981  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:38.876003  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:38.876009  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:38.876013  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:38.879865  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:38.879884  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:38.879890  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:38.879893  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:38.879897  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:38.879900  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:38.879903  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:38 GMT
	I0810 22:37:38.880026  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:38.880394  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:38.880410  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:38.880416  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:38.880423  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:38.883970  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:38.883986  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:38.883991  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:38.883995  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:38.883998  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:38 GMT
	I0810 22:37:38.884001  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:38.884005  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:38.884096  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:39.376711  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:39.376753  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:39.376759  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:39.376764  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:39.379336  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:39.379362  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:39.379370  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:39.379375  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:39 GMT
	I0810 22:37:39.379381  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:39.379385  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:39.379390  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:39.379517  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:39.379869  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:39.379883  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:39.379888  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:39.379892  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:39.381726  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:39.381746  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:39.381751  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:39.381754  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:39.381757  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:39.381760  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:39.381763  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:39 GMT
	I0810 22:37:39.381866  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:39.876478  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:39.876510  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:39.876516  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:39.876520  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:39.879122  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:39.879149  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:39.879155  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:39.879164  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:39.879169  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:39.879173  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:39.879178  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:39 GMT
	I0810 22:37:39.879280  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:39.879726  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:39.879741  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:39.879746  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:39.879750  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:39.883822  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:39.883843  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:39.883849  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:39 GMT
	I0810 22:37:39.883852  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:39.883855  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:39.883859  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:39.883865  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:39.884086  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:40.375928  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:40.375956  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:40.375964  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:40.375970  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:40.378586  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:40.378615  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:40.378622  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:40.378628  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:40.378632  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:40.378637  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:40 GMT
	I0810 22:37:40.378641  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:40.378758  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:40.379136  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:40.379152  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:40.379157  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:40.379161  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:40.381088  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:40.381109  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:40.381114  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:40.381120  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:40.381123  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:40 GMT
	I0810 22:37:40.381126  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:40.381129  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:40.381238  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:40.876844  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:40.876892  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:40.876900  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:40.876907  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:40.879453  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:40.879476  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:40.879483  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:40.879488  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:40 GMT
	I0810 22:37:40.879493  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:40.879498  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:40.879502  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:40.879625  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:40.879992  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:40.880010  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:40.880017  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:40.880024  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:40.883444  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:40.883462  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:40.883469  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:40.883474  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:40.883478  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:40.883483  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:40.883487  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:40 GMT
	I0810 22:37:40.883655  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:40.883916  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:41.375997  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:41.376022  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:41.376030  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:41.376035  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:41.378676  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:41.378700  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:41.378710  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:41.378716  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:41.378721  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:41.378726  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:41 GMT
	I0810 22:37:41.378731  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:41.378855  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:41.379229  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:41.379243  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:41.379250  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:41.379257  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:41.381108  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:41.381125  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:41.381130  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:41.381135  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:41.381138  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:41 GMT
	I0810 22:37:41.381141  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:41.381144  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:41.381244  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:41.876883  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:41.876907  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:41.876912  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:41.876950  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:41.879423  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:41.879463  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:41.879468  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:41.879474  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:41 GMT
	I0810 22:37:41.879479  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:41.879484  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:41.879488  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:41.879596  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:41.879951  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:41.879965  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:41.879970  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:41.879973  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:41.883295  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:41.883313  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:41.883317  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:41.883321  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:41.883324  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:41.883327  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:41.883330  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:41 GMT
	I0810 22:37:41.883442  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:42.376052  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:42.376078  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:42.376085  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:42.376089  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:42.378631  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:42.378660  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:42.378673  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:42.378677  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:42.378681  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:42.378684  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:42.378687  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:42 GMT
	I0810 22:37:42.378790  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:42.379126  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:42.379138  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:42.379142  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:42.379147  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:42.380859  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:42.380879  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:42.380884  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:42.380888  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:42.380892  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:42.380895  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:42.380898  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:42 GMT
	I0810 22:37:42.381045  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:42.876711  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:42.876734  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:42.876740  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:42.876744  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:42.879166  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:42.879190  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:42.879195  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:42.879200  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:42.879205  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:42.879210  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:42 GMT
	I0810 22:37:42.879215  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:42.879331  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:42.879668  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:42.879682  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:42.879686  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:42.879690  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:42.883958  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:42.883980  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:42.883986  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:42.883989  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:42.883993  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:42 GMT
	I0810 22:37:42.883996  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:42.883999  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:42.884129  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:42.884404  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:43.376854  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:43.376882  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:43.376891  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:43.376895  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:43.379591  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:43.379611  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:43.379617  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:43.379634  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:43 GMT
	I0810 22:37:43.379637  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:43.379640  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:43.379643  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:43.379764  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:43.380140  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:43.380153  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:43.380158  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:43.380162  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:43.382158  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:43.382182  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:43.382190  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:43.382195  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:43 GMT
	I0810 22:37:43.382200  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:43.382204  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:43.382208  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:43.382357  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:43.876784  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:43.876808  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:43.876814  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:43.876818  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:43.879337  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:43.879362  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:43.879367  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:43.879371  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:43.879374  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:43.879377  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:43.879381  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:43 GMT
	I0810 22:37:43.879523  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:43.879918  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:43.879933  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:43.879937  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:43.879942  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:43.881868  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:43.881887  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:43.881893  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:43.881898  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:43.881902  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:43.881906  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:43.881909  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:43 GMT
	I0810 22:37:43.882011  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:44.376632  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:44.376667  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:44.376673  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:44.376678  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:44.379156  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:44.379177  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:44.379183  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:44.379186  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:44.379189  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:44.379194  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:44.379199  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:44 GMT
	I0810 22:37:44.379330  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:44.379697  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:44.379711  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:44.379716  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:44.379720  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:44.381720  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:44.381738  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:44.381744  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:44.381748  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:44.381756  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:44.381760  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:44.381765  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:44 GMT
	I0810 22:37:44.381926  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:44.876550  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:44.876574  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:44.876580  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:44.876584  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:44.879090  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:44.879115  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:44.879123  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:44.879128  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:44.879140  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:44.879146  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:44 GMT
	I0810 22:37:44.879150  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:44.879284  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:44.879714  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:44.879729  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:44.879736  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:44.879742  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:44.883604  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:44.883622  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:44.883627  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:44.883638  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:44.883641  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:44.883645  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:44.883649  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:44 GMT
	I0810 22:37:44.883767  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:45.376523  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:45.376554  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:45.376561  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:45.376565  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:45.379308  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:45.379340  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:45.379348  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:45.379353  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:45.379358  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:45.379362  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:45.379368  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:45 GMT
	I0810 22:37:45.379500  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:45.379872  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:45.379887  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:45.379894  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:45.379898  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:45.381982  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:45.382008  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:45.382016  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:45.382021  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:45.382025  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:45 GMT
	I0810 22:37:45.382031  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:45.382036  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:45.382147  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:45.382420  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:45.876590  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:45.876615  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:45.876622  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:45.876626  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:45.879066  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:45.879093  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:45.879099  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:45.879103  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:45 GMT
	I0810 22:37:45.879107  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:45.879114  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:45.879118  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:45.879246  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:45.879580  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:45.879594  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:45.879600  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:45.879604  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:45.883287  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:37:45.883311  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:45.883319  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:45.883324  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:45.883328  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:45.883333  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:45.883342  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:45 GMT
	I0810 22:37:45.883434  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:46.376012  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:46.376043  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:46.376050  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:46.376054  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:46.378754  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:46.378780  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:46.378786  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:46.378791  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:46.378794  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:46.378797  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:46.378800  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:46 GMT
	I0810 22:37:46.378884  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:46.379248  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:46.379263  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:46.379268  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:46.379273  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:46.381167  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:46.381189  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:46.381196  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:46.381201  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:46 GMT
	I0810 22:37:46.381206  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:46.381210  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:46.381214  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:46.381338  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:46.875944  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:46.875980  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:46.875987  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:46.875991  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:46.878321  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:46.878350  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:46.878358  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:46.878366  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:46 GMT
	I0810 22:37:46.878372  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:46.878376  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:46.878381  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:46.878594  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:46.879069  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:46.879091  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:46.879098  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:46.879103  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:46.881013  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:46.881033  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:46.881038  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:46 GMT
	I0810 22:37:46.881041  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:46.881044  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:46.881047  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:46.881050  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:46.881189  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:47.376813  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:47.376866  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:47.376874  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:47.376880  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:47.379465  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:47.379486  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:47.379494  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:47.379498  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:47 GMT
	I0810 22:37:47.379506  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:47.379511  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:47.379516  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:47.379660  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:47.380071  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:47.380097  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:47.380105  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:47.380111  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:47.381867  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:47.381885  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:47.381892  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:47.381897  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:47.381905  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:47.381909  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:47.381913  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:47 GMT
	I0810 22:37:47.382012  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:47.876661  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:47.876689  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:47.876701  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:47.876706  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:47.881388  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:47.881414  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:47.881420  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:47.881423  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:47.881427  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:47.881430  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:47 GMT
	I0810 22:37:47.881434  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:47.881549  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:47.881912  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:47.881926  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:47.881931  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:47.881935  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:47.883813  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:47.883837  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:47.883843  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:47.883847  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:47.883850  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:47.883854  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:47 GMT
	I0810 22:37:47.883857  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:47.883995  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:47.884366  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:48.376627  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:48.376654  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:48.376660  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:48.376664  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:48.379361  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:48.379385  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:48.379393  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:48.379399  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:48.379405  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:48.379411  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:48.379416  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:48 GMT
	I0810 22:37:48.379556  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:48.380005  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:48.380027  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:48.380035  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:48.380071  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:48.381849  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:48.381867  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:48.381872  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:48 GMT
	I0810 22:37:48.381875  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:48.381878  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:48.381881  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:48.381884  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:48.381992  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:48.876645  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:48.876668  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:48.876674  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:48.876678  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:48.879083  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:48.879106  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:48.879113  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:48.879118  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:48.879123  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:48.879127  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:48.879132  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:48 GMT
	I0810 22:37:48.879249  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:48.879739  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:48.879755  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:48.879765  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:48.879771  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:48.925089  411387 round_trippers.go:457] Response Status: 200 OK in 45 milliseconds
	I0810 22:37:48.925121  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:48.925129  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:48 GMT
	I0810 22:37:48.925132  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:48.925136  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:48.925139  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:48.925143  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:48.925257  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:49.376813  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:49.376849  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:49.376855  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:49.376863  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:49.379487  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:49.379514  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:49.379523  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:49.379528  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:49.379532  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:49.379536  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:49.379541  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:49 GMT
	I0810 22:37:49.379637  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:49.379987  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:49.379999  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:49.380004  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:49.380008  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:49.381777  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:49.381791  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:49.381798  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:49.381801  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:49.381806  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:49.381810  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:49.381814  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:49 GMT
	I0810 22:37:49.381982  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:49.876704  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:49.876733  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:49.876740  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:49.876744  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:49.879281  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:49.879304  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:49.879310  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:49.879314  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:49.879317  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:49.879320  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:49.879323  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:49 GMT
	I0810 22:37:49.879416  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:49.879779  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:49.879796  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:49.879800  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:49.879804  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:49.881672  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:49.881701  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:49.881708  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:49.881714  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:49.881719  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:49.881722  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:49 GMT
	I0810 22:37:49.881725  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:49.881827  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:50.376698  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:50.376728  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:50.376734  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:50.376738  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:50.379250  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:50.379272  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:50.379277  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:50.379280  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:50.379284  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:50.379287  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:50.379293  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:50 GMT
	I0810 22:37:50.379374  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:50.379721  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:50.379736  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:50.379741  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:50.379745  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:50.381697  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:50.381719  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:50.381724  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:50 GMT
	I0810 22:37:50.381727  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:50.381732  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:50.381736  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:50.381741  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:50.381841  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:50.382116  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:50.876555  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:50.876581  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:50.876587  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:50.876591  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:50.879073  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:50.879094  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:50.879099  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:50.879103  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:50.879107  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:50.879112  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:50 GMT
	I0810 22:37:50.879119  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:50.879245  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:50.879620  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:50.879632  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:50.879637  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:50.879642  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:50.881434  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:50.881453  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:50.881460  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:50.881465  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:50.881469  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:50 GMT
	I0810 22:37:50.881474  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:50.881478  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:50.881598  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:51.376147  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:51.376176  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:51.376182  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:51.376187  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:51.378889  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:51.378916  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:51.378924  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:51.378930  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:51.378934  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:51.378939  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:51.378944  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:51 GMT
	I0810 22:37:51.379080  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:51.379444  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:51.379462  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:51.379470  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:51.379476  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:51.381214  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:51.381236  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:51.381242  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:51.381246  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:51.381251  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:51.381257  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:51 GMT
	I0810 22:37:51.381263  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:51.381398  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:51.876959  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:51.876988  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:51.876996  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:51.877003  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:51.879439  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:51.879465  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:51.879472  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:51.879475  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:51.879479  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:51.879482  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:51.879488  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:51 GMT
	I0810 22:37:51.879634  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:51.880050  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:51.880068  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:51.880073  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:51.880077  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:51.885839  411387 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0810 22:37:51.885870  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:51.885876  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:51.885879  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:51.885882  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:51.885886  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:51.885889  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:51 GMT
	I0810 22:37:51.886070  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:52.376767  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:52.376793  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:52.376799  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:52.376803  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:52.379081  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:52.379101  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:52.379106  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:52.379109  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:52.379112  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:52.379118  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:52.379121  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:52 GMT
	I0810 22:37:52.379232  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:52.379612  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:52.379631  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:52.379639  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:52.379645  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:52.381298  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:52.381314  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:52.381319  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:52.381323  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:52.381325  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:52.381329  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:52.381331  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:52 GMT
	I0810 22:37:52.381421  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:52.876041  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:52.876064  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:52.876070  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:52.876074  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:52.878574  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:52.878596  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:52.878603  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:52.878609  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:52.878614  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:52.878618  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:52.878623  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:52 GMT
	I0810 22:37:52.878734  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:52.879064  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:52.879076  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:52.879081  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:52.879085  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:52.880791  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:52.880812  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:52.880817  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:52.880822  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:52.880827  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:52 GMT
	I0810 22:37:52.880832  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:52.880836  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:52.880944  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:52.881193  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:53.376611  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:53.376638  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:53.376643  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:53.376647  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:53.379619  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:53.379643  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:53.379650  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:53.379655  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:53.379658  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:53 GMT
	I0810 22:37:53.379661  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:53.379664  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:53.379795  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:53.380152  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:53.380168  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:53.380176  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:53.380180  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:53.381999  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:53.382019  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:53.382035  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:53.382040  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:53.382043  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:53 GMT
	I0810 22:37:53.382047  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:53.382050  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:53.382144  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:53.876752  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:53.876777  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:53.876783  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:53.876789  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:53.879545  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:53.879566  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:53.879571  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:53.879574  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:53.879578  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:53.879581  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:53 GMT
	I0810 22:37:53.879584  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:53.879681  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:53.880031  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:53.880046  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:53.880051  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:53.880055  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:53.881953  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:53.881976  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:53.881984  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:53.881989  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:53.881993  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:53.881997  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:53.882000  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:53 GMT
	I0810 22:37:53.882094  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:54.376753  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:54.376782  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:54.376788  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:54.376793  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:54.379467  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:54.379491  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:54.379499  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:54.379504  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:54 GMT
	I0810 22:37:54.379508  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:54.379512  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:54.379516  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:54.379605  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:54.379930  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:54.379944  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:54.379950  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:54.379958  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:54.381581  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:54.381611  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:54.381619  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:54 GMT
	I0810 22:37:54.381624  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:54.381629  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:54.381633  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:54.381638  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:54.381735  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:54.876274  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:54.876301  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:54.876307  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:54.876312  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:54.878792  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:54.878814  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:54.878819  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:54.878823  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:54.878826  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:54.878830  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:54.878836  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:54 GMT
	I0810 22:37:54.878963  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:54.879327  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:54.879342  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:54.879346  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:54.879350  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:54.881098  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:54.881119  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:54.881126  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:54.881130  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:54.881135  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:54 GMT
	I0810 22:37:54.881140  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:54.881143  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:54.881316  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:54.881560  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
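The trace above is a readiness poll: roughly every 500ms the client GETs the coredns pod and its node, then `pod_ready.go` reports whether the pod's `Ready` condition is `True`. A minimal sketch of that condition check, assuming a hypothetical `podReady` helper (this is not minikube's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podReady reports whether a Pod object (as returned by
// GET /api/v1/namespaces/{ns}/pods/{name}) has a status condition of
// type "Ready" with status "True". Hypothetical helper for illustration.
func podReady(body []byte) (bool, error) {
	var pod struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(body, &pod); err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet (e.g. pod still Pending).
	return false, nil
}

func main() {
	// A pod still reporting Ready=False, as in the log above.
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready, err := podReady(sample)
	fmt.Println(ready, err)
}
```

In the real loop the client would re-fetch and re-check until the condition flips to `True` or the wait deadline expires, which is why the same pair of GETs repeats in the log until the timeout.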
	I0810 22:37:55.376295  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:55.376323  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:55.376329  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:55.376333  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:55.378835  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:55.378856  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:55.378863  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:55.378868  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:55.378873  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:55.378877  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:55.378882  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:55 GMT
	I0810 22:37:55.379017  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:55.379416  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:55.379431  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:55.379436  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:55.379440  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:55.381256  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:55.381277  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:55.381284  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:55.381297  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:55.381302  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:55 GMT
	I0810 22:37:55.381306  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:55.381310  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:55.381401  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:55.876115  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:55.876144  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:55.876150  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:55.876155  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:55.880610  411387 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:37:55.880634  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:55.880640  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:55.880644  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:55.880647  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:55.880651  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:55.880654  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:55 GMT
	I0810 22:37:55.880806  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:55.881189  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:55.881203  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:55.881208  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:55.881212  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:55.883000  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:55.883022  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:55.883032  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:55.883037  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:55.883042  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:55.883047  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:55.883051  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:55 GMT
	I0810 22:37:55.883169  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:56.376809  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:56.376835  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:56.376841  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:56.376846  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:56.379447  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:56.379469  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:56.379476  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:56 GMT
	I0810 22:37:56.379482  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:56.379486  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:56.379491  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:56.379495  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:56.379611  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:56.380040  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:56.380056  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:56.380129  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:56.380141  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:56.381926  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:56.381946  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:56.381953  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:56.381970  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:56.381974  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:56.381978  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:56 GMT
	I0810 22:37:56.381982  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:56.382122  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:56.876631  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:56.876661  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:56.876668  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:56.876672  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:56.879048  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:56.879072  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:56.879079  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:56.879082  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:56.879086  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:56.879089  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:56.879093  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:56 GMT
	I0810 22:37:56.879188  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:56.879525  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:56.879547  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:56.879552  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:56.879557  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:56.881251  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:56.881268  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:56.881274  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:56.881280  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:56.881286  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:56.881290  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:56 GMT
	I0810 22:37:56.881293  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:56.881429  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:56.881684  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:57.376033  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:57.376060  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:57.376067  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:57.376071  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:57.378446  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:57.378466  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:57.378472  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:57 GMT
	I0810 22:37:57.378475  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:57.378478  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:57.378481  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:57.378483  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:57.378612  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:57.378951  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:57.378964  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:57.378969  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:57.378974  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:57.380712  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:57.380734  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:57.380745  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:57.380750  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:57.380755  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:57.380760  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:57 GMT
	I0810 22:37:57.380764  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:57.380864  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:57.876413  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:57.876444  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:57.876453  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:57.876459  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:57.878867  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:57.878888  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:57.878897  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:57.878900  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:57.878904  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:57.878912  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:57.878916  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:57 GMT
	I0810 22:37:57.879058  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:57.879398  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:57.879412  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:57.879417  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:57.879421  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:57.880993  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:57.881010  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:57.881017  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:57.881022  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:57.881026  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:57.881030  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:57.881034  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:57 GMT
	I0810 22:37:57.883069  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:58.376809  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:58.376835  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:58.376842  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:58.376846  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:58.379431  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:58.379452  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:58.379458  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:58.379463  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:58 GMT
	I0810 22:37:58.379467  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:58.379471  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:58.379476  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:58.379604  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:58.380020  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:58.380036  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:58.380043  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:58.380049  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:58.381759  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:58.381775  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:58.381779  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:58.381783  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:58.381786  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:58 GMT
	I0810 22:37:58.381789  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:58.381792  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:58.381878  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:58.876511  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:58.876542  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:58.876602  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:58.876660  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:58.878971  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:58.878992  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:58.878998  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:58.879004  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:58.879008  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:58.879012  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:58.879016  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:58 GMT
	I0810 22:37:58.879218  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:58.879574  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:58.879590  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:58.879597  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:58.879602  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:58.881430  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:58.881450  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:58.881457  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:58.881462  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:58.881467  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:58 GMT
	I0810 22:37:58.881472  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:58.881476  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:58.881569  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:58.881828  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:37:59.376110  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:59.376141  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:59.376147  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:59.376151  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:59.378698  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:59.378719  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:59.378725  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:59.378729  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:59.378732  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:59 GMT
	I0810 22:37:59.378736  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:59.378739  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:59.378881  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:59.379253  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:59.379269  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:59.379274  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:59.379278  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:59.380944  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:59.380960  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:59.380965  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:59.380969  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:59.380972  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:59.380975  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:59.380978  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:59 GMT
	I0810 22:37:59.381096  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:37:59.876779  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:37:59.876803  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:59.876809  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:59.876813  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:59.879332  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:37:59.879355  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:59.879361  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:59.879365  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:59.879368  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:59.879371  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:59.879374  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:59 GMT
	I0810 22:37:59.879500  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:37:59.879947  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:37:59.879972  411387 round_trippers.go:438] Request Headers:
	I0810 22:37:59.879980  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:37:59.879987  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:37:59.881999  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:37:59.882019  411387 round_trippers.go:460] Response Headers:
	I0810 22:37:59.882025  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:37:59.882029  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:37:59.882033  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:37:59 GMT
	I0810 22:37:59.882038  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:37:59.882045  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:37:59.882171  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:00.375945  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:38:00.375971  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:00.375977  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:00.375981  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:00.378278  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:00.378303  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:00.378309  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:00.378314  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:00.378318  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:00.378323  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:00.378329  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:00 GMT
	I0810 22:38:00.378426  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:38:00.378795  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:00.378809  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:00.378814  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:00.378818  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:00.380417  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:00.380432  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:00.380438  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:00.380443  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:00.380449  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:00.380453  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:00.380458  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:00 GMT
	I0810 22:38:00.380588  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:00.876225  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:38:00.876251  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:00.876258  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:00.876262  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:00.961680  411387 round_trippers.go:457] Response Status: 200 OK in 85 milliseconds
	I0810 22:38:00.961708  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:00.961714  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:00.961718  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:00 GMT
	I0810 22:38:00.961721  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:00.961725  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:00.961728  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:00.961884  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"445","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0810 22:38:00.962271  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:00.962287  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:00.962292  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:00.962297  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:00.964026  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:00.964048  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:00.964055  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:00.964059  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:00.964063  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:00.964068  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:00.964076  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:00 GMT
	I0810 22:38:00.964258  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:00.964510  411387 pod_ready.go:102] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"False"
	I0810 22:38:01.376802  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:38:01.376833  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.376842  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.376848  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.379386  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:01.379413  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.379420  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.379424  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.379428  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.379431  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.379435  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.379575  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"528","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5736 chars]
	I0810 22:38:01.379957  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:01.379973  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.379978  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.379982  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.381810  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:01.381829  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.381834  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.381837  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.381842  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.381847  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.381851  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.381992  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:01.382284  411387 pod_ready.go:92] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:01.382307  411387 pod_ready.go:81] duration metric: took 50.517217922s waiting for pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.382319  411387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-wmrkg" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.382383  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-wmrkg
	I0810 22:38:01.382395  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.382402  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.382408  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.383868  411387 round_trippers.go:457] Response Status: 404 Not Found in 1 milliseconds
	I0810 22:38:01.383893  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.383899  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.383904  411387 round_trippers.go:463]     Content-Length: 216
	I0810 22:38:01.383909  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.383913  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.383919  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.383924  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.383940  411387 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-558bd4d5db-wmrkg\" not found","reason":"NotFound","details":{"name":"coredns-558bd4d5db-wmrkg","kind":"pods"},"code":404}
	I0810 22:38:01.384338  411387 pod_ready.go:97] error getting pod "coredns-558bd4d5db-wmrkg" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-wmrkg" not found
	I0810 22:38:01.384354  411387 pod_ready.go:81] duration metric: took 2.023821ms waiting for pod "coredns-558bd4d5db-wmrkg" in "kube-system" namespace to be "Ready" ...
	E0810 22:38:01.384362  411387 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-wmrkg" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-wmrkg" not found
	I0810 22:38:01.384409  411387 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.384455  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223625-345780
	I0810 22:38:01.384463  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.384470  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.384474  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.386285  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:01.386303  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.386309  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.386313  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.386318  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.386322  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.386326  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.386412  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223625-345780","namespace":"kube-system","uid":"cf0c44d7-8ffd-488d-9df9-5e1525664f05","resourceVersion":"285","creationTimestamp":"2021-08-10T22:36:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"8d837e7f6a5c73c311006df3eb1878eb","kubernetes.io/config.mirror":"8d837e7f6a5c73c311006df3eb1878eb","kubernetes.io/config.seen":"2021-08-10T22:36:56.517157696Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.has [truncated 5564 chars]
	I0810 22:38:01.386732  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:01.386749  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.386755  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.386762  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.388213  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:01.388232  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.388238  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.388243  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.388248  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.388252  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.388256  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.388356  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:01.388585  411387 pod_ready.go:92] pod "etcd-multinode-20210810223625-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:01.388608  411387 pod_ready.go:81] duration metric: took 4.192101ms waiting for pod "etcd-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.388619  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.388667  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210810223625-345780
	I0810 22:38:01.388674  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.388679  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.388684  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.390193  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:01.390221  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.390228  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.390234  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.390238  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.390244  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.390249  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.390393  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210810223625-345780","namespace":"kube-system","uid":"db837571-f437-4487-bf1b-2fcd95f2792f","resourceVersion":"293","creationTimestamp":"2021-08-10T22:36:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"0ec6379c2d3124d4e5d783fbbe51e0a9","kubernetes.io/config.mirror":"0ec6379c2d3124d4e5d783fbbe51e0a9","kubernetes.io/config.seen":"2021-08-10T22:36:42.168375581Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addres [truncated 8091 chars]
	I0810 22:38:01.390708  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:01.390722  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.390727  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.390731  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.392188  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:01.392206  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.392213  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.392217  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.392222  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.392226  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.392231  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.392300  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:01.392521  411387 pod_ready.go:92] pod "kube-apiserver-multinode-20210810223625-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:01.392537  411387 pod_ready.go:81] duration metric: took 3.909089ms waiting for pod "kube-apiserver-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.392546  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.392591  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210810223625-345780
	I0810 22:38:01.392600  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.392605  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.392610  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.394254  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:01.394269  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.394273  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.394277  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.394280  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.394283  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.394286  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.394375  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210810223625-345780","namespace":"kube-system","uid":"ca03e430-c2fe-4342-bf58-8881dc7681e6","resourceVersion":"289","creationTimestamp":"2021-08-10T22:36:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5928e8fa0e03d8e9cfa8d0d54904a9b2","kubernetes.io/config.mirror":"5928e8fa0e03d8e9cfa8d0d54904a9b2","kubernetes.io/config.seen":"2021-08-10T22:36:56.517180929Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7657 chars]
	I0810 22:38:01.394650  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:01.394662  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.394666  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.394670  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.396290  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:01.396307  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.396312  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.396315  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.396318  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.396321  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.396324  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.396902  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:01.397394  411387 pod_ready.go:92] pod "kube-controller-manager-multinode-20210810223625-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:01.397412  411387 pod_ready.go:81] duration metric: took 4.859271ms waiting for pod "kube-controller-manager-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.397444  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjpnd" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.397509  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjpnd
	I0810 22:38:01.397516  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.397523  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.397529  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.399758  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:01.399777  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.399783  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.399788  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.399793  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.399798  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.399803  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.399913  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjpnd","generateName":"kube-proxy-","namespace":"kube-system","uid":"faf63065-64c1-40bf-a45f-9f974c5a950a","resourceVersion":"481","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c3a997cc-f437-4a94-8731-52c9d831f23a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3a997cc-f437-4a94-8731-52c9d831f23a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5756 chars]
	I0810 22:38:01.577068  411387 request.go:600] Waited for 176.829596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:01.577131  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:01.577136  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.577143  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.577147  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.579443  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:01.579470  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.579476  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.579480  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.579483  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.579487  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.579491  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.579604  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:01.579892  411387 pod_ready.go:92] pod "kube-proxy-mjpnd" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:01.579905  411387 pod_ready.go:81] duration metric: took 182.453735ms waiting for pod "kube-proxy-mjpnd" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.579915  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.777404  411387 request.go:600] Waited for 197.413329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223625-345780
	I0810 22:38:01.777469  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223625-345780
	I0810 22:38:01.777475  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.777483  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.777491  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.779940  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:01.779968  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.779976  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.779982  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.779987  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.779992  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.779997  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.780112  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210810223625-345780","namespace":"kube-system","uid":"42c8e1f7-7601-46c4-a4ed-41453ba322ed","resourceVersion":"300","creationTimestamp":"2021-08-10T22:36:50Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fa1e47c19fa2d5c7ea26f213a01edf2a","kubernetes.io/config.mirror":"fa1e47c19fa2d5c7ea26f213a01edf2a","kubernetes.io/config.seen":"2021-08-10T22:36:42.168377963Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4539 chars]
	I0810 22:38:01.977779  411387 request.go:600] Waited for 197.366888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:01.977860  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:01.977869  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:01.977876  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:01.977883  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:01.980415  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:01.980438  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:01.980444  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:01.980449  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:01 GMT
	I0810 22:38:01.980453  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:01.980456  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:01.980459  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:01.980569  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:01.980848  411387 pod_ready.go:92] pod "kube-scheduler-multinode-20210810223625-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:01.980859  411387 pod_ready.go:81] duration metric: took 400.936982ms waiting for pod "kube-scheduler-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:01.980867  411387 pod_ready.go:38] duration metric: took 51.129268041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:38:01.980886  411387 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:38:01.980981  411387 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:38:02.002383  411387 command_runner.go:124] > 1339
	I0810 22:38:02.002450  411387 api_server.go:70] duration metric: took 51.240543511s to wait for apiserver process to appear ...
	I0810 22:38:02.002467  411387 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:38:02.002477  411387 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:38:02.010202  411387 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0810 22:38:02.010295  411387 round_trippers.go:432] GET https://192.168.49.2:8443/version?timeout=32s
	I0810 22:38:02.010307  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:02.010315  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:02.010324  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:02.011009  411387 round_trippers.go:457] Response Status: 200 OK in 0 milliseconds
	I0810 22:38:02.011023  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:02.011027  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:02.011030  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:02.011034  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:02.011037  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:02.011040  411387 round_trippers.go:463]     Content-Length: 263
	I0810 22:38:02.011043  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:02 GMT
	I0810 22:38:02.011068  411387 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0810 22:38:02.011241  411387 api_server.go:139] control plane version: v1.21.3
	I0810 22:38:02.011260  411387 api_server.go:129] duration metric: took 8.786794ms to wait for apiserver health ...
	I0810 22:38:02.011270  411387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:38:02.177683  411387 request.go:600] Waited for 166.319186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0810 22:38:02.177755  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0810 22:38:02.177761  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:02.177766  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:02.177771  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:02.181433  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:38:02.181460  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:02.181469  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:02.181475  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:02.181480  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:02.181485  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:02.181494  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:02 GMT
	I0810 22:38:02.181904  411387 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"528","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 54528 chars]
	I0810 22:38:02.183578  411387 system_pods.go:59] 8 kube-system pods found
	I0810 22:38:02.183608  411387 system_pods.go:61] "coredns-558bd4d5db-brf4l" [827b85ba-dadb-4a2a-baa7-557371796646] Running
	I0810 22:38:02.183615  411387 system_pods.go:61] "etcd-multinode-20210810223625-345780" [cf0c44d7-8ffd-488d-9df9-5e1525664f05] Running
	I0810 22:38:02.183623  411387 system_pods.go:61] "kindnet-v8dtb" [0b1964d8-6ee9-4dec-bba2-d4f6a9f38463] Running
	I0810 22:38:02.183629  411387 system_pods.go:61] "kube-apiserver-multinode-20210810223625-345780" [db837571-f437-4487-bf1b-2fcd95f2792f] Running
	I0810 22:38:02.183636  411387 system_pods.go:61] "kube-controller-manager-multinode-20210810223625-345780" [ca03e430-c2fe-4342-bf58-8881dc7681e6] Running
	I0810 22:38:02.183649  411387 system_pods.go:61] "kube-proxy-mjpnd" [faf63065-64c1-40bf-a45f-9f974c5a950a] Running
	I0810 22:38:02.183653  411387 system_pods.go:61] "kube-scheduler-multinode-20210810223625-345780" [42c8e1f7-7601-46c4-a4ed-41453ba322ed] Running
	I0810 22:38:02.183656  411387 system_pods.go:61] "storage-provisioner" [94400cf8-9fbe-457d-99d8-78eb282c11cb] Running
	I0810 22:38:02.183662  411387 system_pods.go:74] duration metric: took 172.383249ms to wait for pod list to return data ...
	I0810 22:38:02.183672  411387 default_sa.go:34] waiting for default service account to be created ...
	I0810 22:38:02.377124  411387 request.go:600] Waited for 193.37203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0810 22:38:02.377203  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0810 22:38:02.377215  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:02.377226  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:02.377234  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:02.379665  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:02.379684  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:02.379688  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:02.379692  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:02.379695  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:02.379698  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:02.379704  411387 round_trippers.go:463]     Content-Length: 304
	I0810 22:38:02.379708  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:02 GMT
	I0810 22:38:02.379728  411387 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"43615b58-b566-42ff-a000-393845f25ebc","resourceVersion":"406","creationTimestamp":"2021-08-10T22:37:10Z"},"secrets":[{"name":"default-token-9984d"}]}]}
	I0810 22:38:02.380287  411387 default_sa.go:45] found service account: "default"
	I0810 22:38:02.380305  411387 default_sa.go:55] duration metric: took 196.626641ms for default service account to be created ...
	I0810 22:38:02.380314  411387 system_pods.go:116] waiting for k8s-apps to be running ...
	I0810 22:38:02.577793  411387 request.go:600] Waited for 197.386235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0810 22:38:02.577863  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0810 22:38:02.577878  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:02.577889  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:02.577898  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:02.581622  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:38:02.581646  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:02.581657  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:02.581661  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:02.581664  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:02.581668  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:02 GMT
	I0810 22:38:02.581671  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:02.582105  411387 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"528","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 54528 chars]
	I0810 22:38:02.583351  411387 system_pods.go:86] 8 kube-system pods found
	I0810 22:38:02.583373  411387 system_pods.go:89] "coredns-558bd4d5db-brf4l" [827b85ba-dadb-4a2a-baa7-557371796646] Running
	I0810 22:38:02.583379  411387 system_pods.go:89] "etcd-multinode-20210810223625-345780" [cf0c44d7-8ffd-488d-9df9-5e1525664f05] Running
	I0810 22:38:02.583385  411387 system_pods.go:89] "kindnet-v8dtb" [0b1964d8-6ee9-4dec-bba2-d4f6a9f38463] Running
	I0810 22:38:02.583390  411387 system_pods.go:89] "kube-apiserver-multinode-20210810223625-345780" [db837571-f437-4487-bf1b-2fcd95f2792f] Running
	I0810 22:38:02.583394  411387 system_pods.go:89] "kube-controller-manager-multinode-20210810223625-345780" [ca03e430-c2fe-4342-bf58-8881dc7681e6] Running
	I0810 22:38:02.583401  411387 system_pods.go:89] "kube-proxy-mjpnd" [faf63065-64c1-40bf-a45f-9f974c5a950a] Running
	I0810 22:38:02.583405  411387 system_pods.go:89] "kube-scheduler-multinode-20210810223625-345780" [42c8e1f7-7601-46c4-a4ed-41453ba322ed] Running
	I0810 22:38:02.583410  411387 system_pods.go:89] "storage-provisioner" [94400cf8-9fbe-457d-99d8-78eb282c11cb] Running
	I0810 22:38:02.583416  411387 system_pods.go:126] duration metric: took 203.097057ms to wait for k8s-apps to be running ...
	I0810 22:38:02.583453  411387 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:38:02.583499  411387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:38:02.593268  411387 system_svc.go:56] duration metric: took 9.805331ms WaitForService to wait for kubelet.
	I0810 22:38:02.593296  411387 kubeadm.go:547] duration metric: took 51.831390469s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:38:02.593328  411387 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:38:02.777781  411387 request.go:600] Waited for 184.346645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0810 22:38:02.777862  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0810 22:38:02.777894  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:02.777906  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:02.777915  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:02.780177  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:02.780198  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:02.780208  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:02.780213  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:02.780217  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:02 GMT
	I0810 22:38:02.780222  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:02.780225  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:02.780386  411387 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-mana
ged-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","opera [truncated 6657 chars]
	I0810 22:38:02.781500  411387 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0810 22:38:02.781558  411387 node_conditions.go:123] node cpu capacity is 8
	I0810 22:38:02.781574  411387 node_conditions.go:105] duration metric: took 188.241277ms to run NodePressure ...
	I0810 22:38:02.781586  411387 start.go:231] waiting for startup goroutines ...
	I0810 22:38:02.784006  411387 out.go:177] 
	I0810 22:38:02.784278  411387 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/config.json ...
	I0810 22:38:02.786348  411387 out.go:177] * Starting node multinode-20210810223625-345780-m02 in cluster multinode-20210810223625-345780
	I0810 22:38:02.786382  411387 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:38:02.787963  411387 out.go:177] * Pulling base image ...
	I0810 22:38:02.787998  411387 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:38:02.788007  411387 cache.go:56] Caching tarball of preloaded images
	I0810 22:38:02.788093  411387 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:38:02.788134  411387 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:38:02.788152  411387 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:38:02.788241  411387 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/config.json ...
	I0810 22:38:02.879771  411387 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:38:02.879810  411387 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:38:02.879831  411387 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:38:02.879878  411387 start.go:313] acquiring machines lock for multinode-20210810223625-345780-m02: {Name:mk962498c4ae541d1dc712dc6e1bb668945dc0f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:38:02.880060  411387 start.go:317] acquired machines lock for "multinode-20210810223625-345780-m02" in 152.945µs
	I0810 22:38:02.880097  411387 start.go:89] Provisioning new machine with config: &{Name:multinode-20210810223625-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223625-345780 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0810 22:38:02.880215  411387 start.go:126] createHost starting for "m02" (driver="docker")
	I0810 22:38:02.883015  411387 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0810 22:38:02.883184  411387 start.go:160] libmachine.API.Create for "multinode-20210810223625-345780" (driver="docker")
	I0810 22:38:02.883215  411387 client.go:168] LocalClient.Create starting
	I0810 22:38:02.883316  411387 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:38:02.883374  411387 main.go:130] libmachine: Decoding PEM data...
	I0810 22:38:02.883395  411387 main.go:130] libmachine: Parsing certificate...
	I0810 22:38:02.883500  411387 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:38:02.883517  411387 main.go:130] libmachine: Decoding PEM data...
	I0810 22:38:02.883532  411387 main.go:130] libmachine: Parsing certificate...
	I0810 22:38:02.883777  411387 cli_runner.go:115] Run: docker network inspect multinode-20210810223625-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:38:02.921868  411387 network_create.go:67] Found existing network {name:multinode-20210810223625-345780 subnet:0xc0006668a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0810 22:38:02.921910  411387 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20210810223625-345780-m02" container
	I0810 22:38:02.921971  411387 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0810 22:38:02.958455  411387 cli_runner.go:115] Run: docker volume create multinode-20210810223625-345780-m02 --label name.minikube.sigs.k8s.io=multinode-20210810223625-345780-m02 --label created_by.minikube.sigs.k8s.io=true
	I0810 22:38:02.996469  411387 oci.go:102] Successfully created a docker volume multinode-20210810223625-345780-m02
	I0810 22:38:02.996551  411387 cli_runner.go:115] Run: docker run --rm --name multinode-20210810223625-345780-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210810223625-345780-m02 --entrypoint /usr/bin/test -v multinode-20210810223625-345780-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0810 22:38:03.721199  411387 oci.go:106] Successfully prepared a docker volume multinode-20210810223625-345780-m02
	W0810 22:38:03.721263  411387 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0810 22:38:03.721276  411387 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0810 22:38:03.721336  411387 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:38:03.721395  411387 kic.go:179] Starting extracting preloaded images to volume ...
	I0810 22:38:03.721447  411387 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210810223625-345780-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0810 22:38:03.721347  411387 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0810 22:38:03.805275  411387 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210810223625-345780-m02 --name multinode-20210810223625-345780-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210810223625-345780-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210810223625-345780-m02 --network multinode-20210810223625-345780 --ip 192.168.49.3 --volume multinode-20210810223625-345780-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0810 22:38:04.368834  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780-m02 --format={{.State.Running}}
	I0810 22:38:04.415628  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780-m02 --format={{.State.Status}}
	I0810 22:38:04.462034  411387 cli_runner.go:115] Run: docker exec multinode-20210810223625-345780-m02 stat /var/lib/dpkg/alternatives/iptables
	I0810 22:38:04.598589  411387 oci.go:278] the created container "multinode-20210810223625-345780-m02" has a running status.
	I0810 22:38:04.598640  411387 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa...
	I0810 22:38:04.853815  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0810 22:38:04.853866  411387 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0810 22:38:05.259335  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780-m02 --format={{.State.Status}}
	I0810 22:38:05.300300  411387 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0810 22:38:05.300321  411387 kic_runner.go:115] Args: [docker exec --privileged multinode-20210810223625-345780-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0810 22:38:07.408630  411387 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210810223625-345780-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (3.687124778s)
	I0810 22:38:07.408666  411387 kic.go:188] duration metric: took 3.687269 seconds to extract preloaded images to volume
	I0810 22:38:07.408754  411387 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780-m02 --format={{.State.Status}}
	I0810 22:38:07.448868  411387 machine.go:88] provisioning docker machine ...
	I0810 22:38:07.448907  411387 ubuntu.go:169] provisioning hostname "multinode-20210810223625-345780-m02"
	I0810 22:38:07.448987  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:38:07.488138  411387 main.go:130] libmachine: Using SSH client type: native
	I0810 22:38:07.488306  411387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0810 22:38:07.488320  411387 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210810223625-345780-m02 && echo "multinode-20210810223625-345780-m02" | sudo tee /etc/hostname
	I0810 22:38:07.614376  411387 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210810223625-345780-m02
	
	I0810 22:38:07.614481  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:38:07.654364  411387 main.go:130] libmachine: Using SSH client type: native
	I0810 22:38:07.654561  411387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0810 22:38:07.654591  411387 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210810223625-345780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210810223625-345780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210810223625-345780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:38:07.768832  411387 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:38:07.768870  411387 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:38:07.768896  411387 ubuntu.go:177] setting up certificates
	I0810 22:38:07.768907  411387 provision.go:83] configureAuth start
	I0810 22:38:07.768990  411387 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210810223625-345780-m02
	I0810 22:38:07.808097  411387 provision.go:137] copyHostCerts
	I0810 22:38:07.808140  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:38:07.808174  411387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:38:07.808186  411387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:38:07.808241  411387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:38:07.808641  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:38:07.808694  411387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:38:07.808703  411387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:38:07.808745  411387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:38:07.808806  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:38:07.808829  411387 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:38:07.808842  411387 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:38:07.808866  411387 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:38:07.808914  411387 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210810223625-345780-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210810223625-345780-m02]
	I0810 22:38:08.007269  411387 provision.go:171] copyRemoteCerts
	I0810 22:38:08.007338  411387 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:38:08.007382  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:38:08.047722  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa Username:docker}
	I0810 22:38:08.132365  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0810 22:38:08.132431  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:38:08.148892  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0810 22:38:08.148960  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0810 22:38:08.165421  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0810 22:38:08.165474  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0810 22:38:08.182398  411387 provision.go:86] duration metric: configureAuth took 413.474649ms
	I0810 22:38:08.182429  411387 ubuntu.go:193] setting minikube options for container-runtime
	I0810 22:38:08.182720  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:38:08.222142  411387 main.go:130] libmachine: Using SSH client type: native
	I0810 22:38:08.222320  411387 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33052 <nil> <nil>}
	I0810 22:38:08.222341  411387 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:38:08.577910  411387 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:38:08.577948  411387 machine.go:91] provisioned docker machine in 1.129055201s
	I0810 22:38:08.577960  411387 client.go:171] LocalClient.Create took 5.694735413s
	I0810 22:38:08.577971  411387 start.go:168] duration metric: libmachine.API.Create for "multinode-20210810223625-345780" took 5.694788737s
	I0810 22:38:08.577987  411387 start.go:267] post-start starting for "multinode-20210810223625-345780-m02" (driver="docker")
	I0810 22:38:08.577995  411387 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:38:08.578059  411387 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:38:08.578109  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:38:08.617104  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa Username:docker}
	I0810 22:38:08.704321  411387 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:38:08.706985  411387 command_runner.go:124] > NAME="Ubuntu"
	I0810 22:38:08.707010  411387 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0810 22:38:08.707016  411387 command_runner.go:124] > ID=ubuntu
	I0810 22:38:08.707023  411387 command_runner.go:124] > ID_LIKE=debian
	I0810 22:38:08.707029  411387 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0810 22:38:08.707035  411387 command_runner.go:124] > VERSION_ID="20.04"
	I0810 22:38:08.707044  411387 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0810 22:38:08.707052  411387 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0810 22:38:08.707062  411387 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0810 22:38:08.707083  411387 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0810 22:38:08.707097  411387 command_runner.go:124] > VERSION_CODENAME=focal
	I0810 22:38:08.707104  411387 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0810 22:38:08.707193  411387 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 22:38:08.707213  411387 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 22:38:08.707226  411387 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 22:38:08.707240  411387 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0810 22:38:08.707256  411387 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:38:08.707308  411387 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:38:08.707400  411387 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> 3457802.pem in /etc/ssl/certs
	I0810 22:38:08.707412  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> /etc/ssl/certs/3457802.pem
	I0810 22:38:08.707535  411387 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:38:08.714212  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:38:08.733852  411387 start.go:270] post-start completed in 155.8453ms
	I0810 22:38:08.734259  411387 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210810223625-345780-m02
	I0810 22:38:08.775282  411387 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/config.json ...
	I0810 22:38:08.775518  411387 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:38:08.775560  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:38:08.813969  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa Username:docker}
	I0810 22:38:08.893762  411387 command_runner.go:124] > 29%!
	(MISSING)I0810 22:38:08.893923  411387 start.go:129] duration metric: createHost completed in 6.013689926s
	I0810 22:38:08.893951  411387 start.go:80] releasing machines lock for "multinode-20210810223625-345780-m02", held for 6.013873602s
	I0810 22:38:08.894051  411387 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210810223625-345780-m02
	I0810 22:38:08.936128  411387 out.go:177] * Found network options:
	I0810 22:38:08.937918  411387 out.go:177]   - NO_PROXY=192.168.49.2
	W0810 22:38:08.937964  411387 proxy.go:118] fail to check proxy env: Error ip not in block
	W0810 22:38:08.938017  411387 proxy.go:118] fail to check proxy env: Error ip not in block
	I0810 22:38:08.938107  411387 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:38:08.938159  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:38:08.938188  411387 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:38:08.938248  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:38:08.978903  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa Username:docker}
	I0810 22:38:08.981800  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa Username:docker}
	I0810 22:38:09.099188  411387 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0810 22:38:09.099231  411387 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0810 22:38:09.099242  411387 command_runner.go:124] > <H1>302 Moved</H1>
	I0810 22:38:09.099250  411387 command_runner.go:124] > The document has moved
	I0810 22:38:09.099260  411387 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0810 22:38:09.099267  411387 command_runner.go:124] > </BODY></HTML>
	I0810 22:38:09.099395  411387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:38:09.109403  411387 docker.go:153] disabling docker service ...
	I0810 22:38:09.109462  411387 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:38:09.119910  411387 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:38:09.128745  411387 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:38:09.139654  411387 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0810 22:38:09.194888  411387 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:38:09.261322  411387 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0810 22:38:09.261386  411387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:38:09.270353  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
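The multi-line Run above (the `%!s(MISSING)` is a format-verb artifact of minikube's own logger, not corruption in this report) writes the crictl configuration echoed back on the next two log lines. As a plain config fragment, the file minikube installs at /etc/crictl.yaml is:

```yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
```

This points crictl at CRI-O's default socket, matching the `sudo crictl version` call later in the log.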
	I0810 22:38:09.282222  411387 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:38:09.282243  411387 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:38:09.282276  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:38:09.289665  411387 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:38:09.289692  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:38:09.297103  411387 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:38:09.302411  411387 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:38:09.302919  411387 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:38:09.302965  411387 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:38:09.309634  411387 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
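The two steps above are transient: minikube loads br_netfilter with modprobe after the sysctl stat fails, and flips ip_forward by writing /proc directly, so neither survives a reboot of the node container. A persistent equivalent (illustrative file paths, not taken from the log) would be:

```
# /etc/modules-load.d/br_netfilter.conf
br_netfilter

# /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```

Kubernetes networking needs both: br_netfilter exposes bridged traffic to iptables, and ip_forward lets the node route pod traffic.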
	I0810 22:38:09.315675  411387 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:38:09.371358  411387 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:38:09.380431  411387 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:38:09.380486  411387 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:38:09.383480  411387 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0810 22:38:09.383507  411387 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0810 22:38:09.383517  411387 command_runner.go:124] > Device: aeh/174d	Inode: 2113209     Links: 1
	I0810 22:38:09.383525  411387 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:38:09.383533  411387 command_runner.go:124] > Access: 2021-08-10 22:38:08.568146668 +0000
	I0810 22:38:09.383540  411387 command_runner.go:124] > Modify: 2021-08-10 22:38:08.568146668 +0000
	I0810 22:38:09.383548  411387 command_runner.go:124] > Change: 2021-08-10 22:38:08.568146668 +0000
	I0810 22:38:09.383552  411387 command_runner.go:124] >  Birth: -
	I0810 22:38:09.383580  411387 start.go:417] Will wait 60s for crictl version
	I0810 22:38:09.383614  411387 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:38:09.412647  411387 command_runner.go:124] > Version:  0.1.0
	I0810 22:38:09.412673  411387 command_runner.go:124] > RuntimeName:  cri-o
	I0810 22:38:09.412678  411387 command_runner.go:124] > RuntimeVersion:  1.20.3
	I0810 22:38:09.412684  411387 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0810 22:38:09.412701  411387 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0810 22:38:09.412765  411387 ssh_runner.go:149] Run: crio --version
	I0810 22:38:09.476426  411387 command_runner.go:124] > crio version 1.20.3
	I0810 22:38:09.476462  411387 command_runner.go:124] > Version:       1.20.3
	I0810 22:38:09.476471  411387 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0810 22:38:09.476477  411387 command_runner.go:124] > GitTreeState:  clean
	I0810 22:38:09.476487  411387 command_runner.go:124] > BuildDate:     2021-06-03T20:25:45Z
	I0810 22:38:09.476495  411387 command_runner.go:124] > GoVersion:     go1.15.2
	I0810 22:38:09.476502  411387 command_runner.go:124] > Compiler:      gc
	I0810 22:38:09.476511  411387 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:38:09.476523  411387 command_runner.go:124] > Linkmode:      dynamic
	I0810 22:38:09.477766  411387 command_runner.go:124] ! time="2021-08-10T22:38:09Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0810 22:38:09.477868  411387 ssh_runner.go:149] Run: crio --version
	I0810 22:38:09.540693  411387 command_runner.go:124] > crio version 1.20.3
	I0810 22:38:09.540723  411387 command_runner.go:124] > Version:       1.20.3
	I0810 22:38:09.540734  411387 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0810 22:38:09.540741  411387 command_runner.go:124] > GitTreeState:  clean
	I0810 22:38:09.540750  411387 command_runner.go:124] > BuildDate:     2021-06-03T20:25:45Z
	I0810 22:38:09.540757  411387 command_runner.go:124] > GoVersion:     go1.15.2
	I0810 22:38:09.540764  411387 command_runner.go:124] > Compiler:      gc
	I0810 22:38:09.540772  411387 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:38:09.540782  411387 command_runner.go:124] > Linkmode:      dynamic
	I0810 22:38:09.542070  411387 command_runner.go:124] ! time="2021-08-10T22:38:09Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0810 22:38:09.544985  411387 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0810 22:38:09.546576  411387 out.go:177]   - env NO_PROXY=192.168.49.2
	I0810 22:38:09.546648  411387 cli_runner.go:115] Run: docker network inspect multinode-20210810223625-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:38:09.585300  411387 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0810 22:38:09.588874  411387 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
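The Run above is an idempotent hosts-file update: strip any existing `host.minikube.internal` entry, append the current one, and copy the result back. A minimal sketch of the same pattern, run against a temp file instead of /etc/hosts so no root is needed (the IP and hostname are the ones recorded in the log):

```shell
# Start from a hosts file that already contains a (possibly stale) entry.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# Remove any old entry (match: tab + name anchored at end of line),
# then append the desired one; writing to a new file and moving it
# into place keeps the update atomic-ish.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old entry is filtered out first, running the snippet repeatedly leaves exactly one `host.minikube.internal` line.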
	I0810 22:38:09.598256  411387 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780 for IP: 192.168.49.3
	I0810 22:38:09.598327  411387 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:38:09.598347  411387 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:38:09.598360  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0810 22:38:09.598375  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0810 22:38:09.598386  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0810 22:38:09.598397  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0810 22:38:09.598458  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem (1338 bytes)
	W0810 22:38:09.598504  411387 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780_empty.pem, impossibly tiny 0 bytes
	I0810 22:38:09.598533  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0810 22:38:09.598566  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:38:09.598591  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:38:09.598616  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:38:09.598667  411387 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:38:09.598703  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:38:09.598716  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem -> /usr/share/ca-certificates/345780.pem
	I0810 22:38:09.598726  411387 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> /usr/share/ca-certificates/3457802.pem
	I0810 22:38:09.599111  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:38:09.616508  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:38:09.632894  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:38:09.649563  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0810 22:38:09.665657  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:38:09.682272  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem --> /usr/share/ca-certificates/345780.pem (1338 bytes)
	I0810 22:38:09.698760  411387 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /usr/share/ca-certificates/3457802.pem (1708 bytes)
	I0810 22:38:09.715733  411387 ssh_runner.go:149] Run: openssl version
	I0810 22:38:09.720567  411387 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0810 22:38:09.720644  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:38:09.727990  411387 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:38:09.731007  411387 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:38:09.731063  411387 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:38:09.731107  411387 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:38:09.735723  411387 command_runner.go:124] > b5213941
	I0810 22:38:09.735780  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:38:09.742704  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/345780.pem && ln -fs /usr/share/ca-certificates/345780.pem /etc/ssl/certs/345780.pem"
	I0810 22:38:09.749581  411387 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/345780.pem
	I0810 22:38:09.752375  411387 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 10 22:29 /usr/share/ca-certificates/345780.pem
	I0810 22:38:09.752425  411387 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:29 /usr/share/ca-certificates/345780.pem
	I0810 22:38:09.752461  411387 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/345780.pem
	I0810 22:38:09.756908  411387 command_runner.go:124] > 51391683
	I0810 22:38:09.757129  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/345780.pem /etc/ssl/certs/51391683.0"
	I0810 22:38:09.764045  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3457802.pem && ln -fs /usr/share/ca-certificates/3457802.pem /etc/ssl/certs/3457802.pem"
	I0810 22:38:09.771247  411387 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3457802.pem
	I0810 22:38:09.774189  411387 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 10 22:29 /usr/share/ca-certificates/3457802.pem
	I0810 22:38:09.774266  411387 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:29 /usr/share/ca-certificates/3457802.pem
	I0810 22:38:09.774310  411387 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3457802.pem
	I0810 22:38:09.778935  411387 command_runner.go:124] > 3ec20f2e
	I0810 22:38:09.779144  411387 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3457802.pem /etc/ssl/certs/3ec20f2e.0"
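The three cert-install sequences above all follow the same OpenSSL convention: a CA in /etc/ssl/certs is found by a symlink named `<subject-hash>.0`, where the hash comes from `openssl x509 -hash`. A sketch of that scheme using a throwaway self-signed cert in a temp directory (the CN and paths here are illustrative, not from the log; assumes the `openssl` CLI is installed):

```shell
# Generate a disposable self-signed cert to stand in for the CA.
d="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$d/ca.key" -out "$d/ca.pem" 2>/dev/null

# Compute the subject hash (e.g. the log's b5213941 for minikubeCA)
# and create the <hash>.0 symlink OpenSSL's lookup expects.
h="$(openssl x509 -hash -noout -in "$d/ca.pem")"
ln -fs "$d/ca.pem" "$d/$h.0"
ls "$d"
```

The `test -L ... || ln -fs ...` form in the log is just the idempotent variant: only create the link if it is not already there.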
	I0810 22:38:09.786175  411387 ssh_runner.go:149] Run: crio config
	I0810 22:38:09.852371  411387 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0810 22:38:09.852414  411387 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0810 22:38:09.852425  411387 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0810 22:38:09.852430  411387 command_runner.go:124] > #
	I0810 22:38:09.852441  411387 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0810 22:38:09.852455  411387 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0810 22:38:09.852469  411387 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0810 22:38:09.852484  411387 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0810 22:38:09.852494  411387 command_runner.go:124] > # reload'.
	I0810 22:38:09.852508  411387 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0810 22:38:09.852521  411387 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0810 22:38:09.852534  411387 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0810 22:38:09.852546  411387 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0810 22:38:09.852554  411387 command_runner.go:124] > [crio]
	I0810 22:38:09.852563  411387 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0810 22:38:09.852574  411387 command_runner.go:124] > # containers images, in this directory.
	I0810 22:38:09.852584  411387 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0810 22:38:09.852601  411387 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0810 22:38:09.852611  411387 command_runner.go:124] > #runroot = "/run/containers/storage"
	I0810 22:38:09.852638  411387 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0810 22:38:09.852653  411387 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0810 22:38:09.852661  411387 command_runner.go:124] > #storage_driver = "overlay"
	I0810 22:38:09.852670  411387 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0810 22:38:09.852684  411387 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0810 22:38:09.852694  411387 command_runner.go:124] > #storage_option = [
	I0810 22:38:09.852701  411387 command_runner.go:124] > #	"overlay.mountopt=nodev",
	I0810 22:38:09.852706  411387 command_runner.go:124] > #]
	I0810 22:38:09.852717  411387 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0810 22:38:09.852730  411387 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0810 22:38:09.852737  411387 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0810 22:38:09.852747  411387 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0810 22:38:09.852753  411387 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0810 22:38:09.852761  411387 command_runner.go:124] > # always happen on a node reboot
	I0810 22:38:09.852766  411387 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0810 22:38:09.852772  411387 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0810 22:38:09.852778  411387 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0810 22:38:09.852784  411387 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0810 22:38:09.852791  411387 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0810 22:38:09.852799  411387 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0810 22:38:09.852803  411387 command_runner.go:124] > [crio.api]
	I0810 22:38:09.852811  411387 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0810 22:38:09.852819  411387 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0810 22:38:09.852828  411387 command_runner.go:124] > # IP address on which the stream server will listen.
	I0810 22:38:09.852836  411387 command_runner.go:124] > stream_address = "127.0.0.1"
	I0810 22:38:09.852843  411387 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0810 22:38:09.852850  411387 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0810 22:38:09.852854  411387 command_runner.go:124] > stream_port = "0"
	I0810 22:38:09.852860  411387 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0810 22:38:09.852865  411387 command_runner.go:124] > stream_enable_tls = false
	I0810 22:38:09.852872  411387 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0810 22:38:09.852877  411387 command_runner.go:124] > stream_idle_timeout = ""
	I0810 22:38:09.852883  411387 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0810 22:38:09.852890  411387 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0810 22:38:09.852894  411387 command_runner.go:124] > # minutes.
	I0810 22:38:09.852898  411387 command_runner.go:124] > stream_tls_cert = ""
	I0810 22:38:09.852904  411387 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0810 22:38:09.852912  411387 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0810 22:38:09.852939  411387 command_runner.go:124] > stream_tls_key = ""
	I0810 22:38:09.852950  411387 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0810 22:38:09.852958  411387 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0810 22:38:09.852965  411387 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0810 22:38:09.852969  411387 command_runner.go:124] > stream_tls_ca = ""
	I0810 22:38:09.852978  411387 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:38:09.852985  411387 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0810 22:38:09.852992  411387 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:38:09.852998  411387 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0810 22:38:09.853004  411387 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0810 22:38:09.853012  411387 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0810 22:38:09.853016  411387 command_runner.go:124] > [crio.runtime]
	I0810 22:38:09.853022  411387 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0810 22:38:09.853029  411387 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0810 22:38:09.853033  411387 command_runner.go:124] > # "nofile=1024:2048"
	I0810 22:38:09.853039  411387 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0810 22:38:09.853044  411387 command_runner.go:124] > #default_ulimits = [
	I0810 22:38:09.853047  411387 command_runner.go:124] > #]
	I0810 22:38:09.853053  411387 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0810 22:38:09.853094  411387 command_runner.go:124] > no_pivot = false
	I0810 22:38:09.853107  411387 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0810 22:38:09.853123  411387 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0810 22:38:09.853141  411387 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0810 22:38:09.853154  411387 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0810 22:38:09.853160  411387 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0810 22:38:09.853165  411387 command_runner.go:124] > conmon = ""
	I0810 22:38:09.853169  411387 command_runner.go:124] > # Cgroup setting for conmon
	I0810 22:38:09.853175  411387 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0810 22:38:09.853182  411387 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0810 22:38:09.853188  411387 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0810 22:38:09.853192  411387 command_runner.go:124] > conmon_env = [
	I0810 22:38:09.853198  411387 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0810 22:38:09.853202  411387 command_runner.go:124] > ]
	I0810 22:38:09.853208  411387 command_runner.go:124] > # Additional environment variables to set for all the
	I0810 22:38:09.853214  411387 command_runner.go:124] > # containers. These are overridden if set in the
	I0810 22:38:09.853222  411387 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0810 22:38:09.853226  411387 command_runner.go:124] > default_env = [
	I0810 22:38:09.853229  411387 command_runner.go:124] > ]
	I0810 22:38:09.853235  411387 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0810 22:38:09.853240  411387 command_runner.go:124] > selinux = false
	I0810 22:38:09.853248  411387 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0810 22:38:09.853256  411387 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0810 22:38:09.853262  411387 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0810 22:38:09.853266  411387 command_runner.go:124] > seccomp_profile = ""
	I0810 22:38:09.853272  411387 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0810 22:38:09.853278  411387 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0810 22:38:09.853285  411387 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0810 22:38:09.853289  411387 command_runner.go:124] > # which might increase security.
	I0810 22:38:09.853295  411387 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0810 22:38:09.853301  411387 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0810 22:38:09.853308  411387 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0810 22:38:09.853315  411387 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0810 22:38:09.853322  411387 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0810 22:38:09.853329  411387 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:38:09.853333  411387 command_runner.go:124] > apparmor_profile = "crio-default"
	I0810 22:38:09.853339  411387 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0810 22:38:09.853344  411387 command_runner.go:124] > # irqbalance daemon.
	I0810 22:38:09.853350  411387 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0810 22:38:09.853355  411387 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0810 22:38:09.853361  411387 command_runner.go:124] > cgroup_manager = "systemd"
	I0810 22:38:09.853367  411387 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0810 22:38:09.853373  411387 command_runner.go:124] > separate_pull_cgroup = ""
	I0810 22:38:09.853380  411387 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0810 22:38:09.853387  411387 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0810 22:38:09.853391  411387 command_runner.go:124] > # will be added.
	I0810 22:38:09.853395  411387 command_runner.go:124] > default_capabilities = [
	I0810 22:38:09.853398  411387 command_runner.go:124] > 	"CHOWN",
	I0810 22:38:09.853402  411387 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0810 22:38:09.853405  411387 command_runner.go:124] > 	"FSETID",
	I0810 22:38:09.853410  411387 command_runner.go:124] > 	"FOWNER",
	I0810 22:38:09.853414  411387 command_runner.go:124] > 	"SETGID",
	I0810 22:38:09.853417  411387 command_runner.go:124] > 	"SETUID",
	I0810 22:38:09.853421  411387 command_runner.go:124] > 	"SETPCAP",
	I0810 22:38:09.853425  411387 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0810 22:38:09.853428  411387 command_runner.go:124] > 	"KILL",
	I0810 22:38:09.853431  411387 command_runner.go:124] > ]
	I0810 22:38:09.853438  411387 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0810 22:38:09.853444  411387 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:38:09.853448  411387 command_runner.go:124] > default_sysctls = [
	I0810 22:38:09.853451  411387 command_runner.go:124] > ]
	I0810 22:38:09.853457  411387 command_runner.go:124] > # List of additional devices. specified as
	I0810 22:38:09.853465  411387 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0810 22:38:09.853471  411387 command_runner.go:124] > #If it is empty or commented out, only the devices
	I0810 22:38:09.853478  411387 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:38:09.853482  411387 command_runner.go:124] > additional_devices = [
	I0810 22:38:09.853485  411387 command_runner.go:124] > ]
	I0810 22:38:09.853491  411387 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0810 22:38:09.853498  411387 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0810 22:38:09.853502  411387 command_runner.go:124] > hooks_dir = [
	I0810 22:38:09.853506  411387 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0810 22:38:09.853509  411387 command_runner.go:124] > ]
	I0810 22:38:09.853515  411387 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0810 22:38:09.853523  411387 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0810 22:38:09.853528  411387 command_runner.go:124] > # its default mounts from the following two files:
	I0810 22:38:09.853531  411387 command_runner.go:124] > #
	I0810 22:38:09.853538  411387 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0810 22:38:09.853544  411387 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0810 22:38:09.853551  411387 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0810 22:38:09.853554  411387 command_runner.go:124] > #
	I0810 22:38:09.853560  411387 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0810 22:38:09.853568  411387 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0810 22:38:09.853575  411387 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0810 22:38:09.853581  411387 command_runner.go:124] > #      only add mounts it finds in this file.
	I0810 22:38:09.853583  411387 command_runner.go:124] > #
	I0810 22:38:09.853587  411387 command_runner.go:124] > #default_mounts_file = ""
	I0810 22:38:09.853593  411387 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0810 22:38:09.853598  411387 command_runner.go:124] > pids_limit = 1024
	I0810 22:38:09.853605  411387 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0810 22:38:09.853611  411387 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0810 22:38:09.853648  411387 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0810 22:38:09.853659  411387 command_runner.go:124] > # limit is never exceeded.
	I0810 22:38:09.853665  411387 command_runner.go:124] > log_size_max = -1
	I0810 22:38:09.853693  411387 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0810 22:38:09.853703  411387 command_runner.go:124] > log_to_journald = false
	I0810 22:38:09.853715  411387 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0810 22:38:09.853721  411387 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0810 22:38:09.853726  411387 command_runner.go:124] > # Path to directory for container attach sockets.
	I0810 22:38:09.853731  411387 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0810 22:38:09.853739  411387 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0810 22:38:09.853743  411387 command_runner.go:124] > bind_mount_prefix = ""
	I0810 22:38:09.853749  411387 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0810 22:38:09.853754  411387 command_runner.go:124] > read_only = false
	I0810 22:38:09.853761  411387 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0810 22:38:09.853768  411387 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0810 22:38:09.853773  411387 command_runner.go:124] > # live configuration reload.
	I0810 22:38:09.853777  411387 command_runner.go:124] > log_level = "info"
	I0810 22:38:09.853783  411387 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0810 22:38:09.853789  411387 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:38:09.853792  411387 command_runner.go:124] > log_filter = ""
	I0810 22:38:09.853798  411387 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0810 22:38:09.853805  411387 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0810 22:38:09.853836  411387 command_runner.go:124] > # separated by comma.
	I0810 22:38:09.853846  411387 command_runner.go:124] > uid_mappings = ""
	I0810 22:38:09.853853  411387 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0810 22:38:09.853859  411387 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0810 22:38:09.853864  411387 command_runner.go:124] > # separated by comma.
	I0810 22:38:09.853868  411387 command_runner.go:124] > gid_mappings = ""
	I0810 22:38:09.853874  411387 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0810 22:38:09.853882  411387 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0810 22:38:09.853890  411387 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0810 22:38:09.853897  411387 command_runner.go:124] > ctr_stop_timeout = 30
	I0810 22:38:09.853903  411387 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0810 22:38:09.853907  411387 command_runner.go:124] > # and manage their lifecycle.
	I0810 22:38:09.853914  411387 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0810 22:38:09.853920  411387 command_runner.go:124] > manage_ns_lifecycle = true
	I0810 22:38:09.853926  411387 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0810 22:38:09.853934  411387 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0810 22:38:09.853940  411387 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0810 22:38:09.853945  411387 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0810 22:38:09.853950  411387 command_runner.go:124] > drop_infra_ctr = false
	I0810 22:38:09.853956  411387 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0810 22:38:09.853962  411387 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0810 22:38:09.853972  411387 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0810 22:38:09.853976  411387 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0810 22:38:09.853982  411387 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0810 22:38:09.853988  411387 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0810 22:38:09.853992  411387 command_runner.go:124] > namespaces_dir = "/var/run"
	I0810 22:38:09.853999  411387 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0810 22:38:09.854005  411387 command_runner.go:124] > pinns_path = ""
	I0810 22:38:09.854011  411387 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0810 22:38:09.854019  411387 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0810 22:38:09.854025  411387 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0810 22:38:09.854030  411387 command_runner.go:124] > default_runtime = "runc"
	I0810 22:38:09.854036  411387 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0810 22:38:09.854043  411387 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0810 22:38:09.854051  411387 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0810 22:38:09.854057  411387 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0810 22:38:09.854061  411387 command_runner.go:124] > #
	I0810 22:38:09.854065  411387 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0810 22:38:09.854071  411387 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0810 22:38:09.854076  411387 command_runner.go:124] > #  runtime_type = "oci"
	I0810 22:38:09.854081  411387 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0810 22:38:09.854085  411387 command_runner.go:124] > #  privileged_without_host_devices = false
	I0810 22:38:09.854091  411387 command_runner.go:124] > #  allowed_annotations = []
	I0810 22:38:09.854094  411387 command_runner.go:124] > # Where:
	I0810 22:38:09.854099  411387 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0810 22:38:09.854107  411387 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0810 22:38:09.854114  411387 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0810 22:38:09.854124  411387 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0810 22:38:09.854129  411387 command_runner.go:124] > #   in $PATH.
	I0810 22:38:09.854135  411387 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0810 22:38:09.854141  411387 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0810 22:38:09.854147  411387 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0810 22:38:09.854151  411387 command_runner.go:124] > #   state.
	I0810 22:38:09.854157  411387 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0810 22:38:09.854162  411387 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0810 22:38:09.854170  411387 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0810 22:38:09.854206  411387 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0810 22:38:09.854217  411387 command_runner.go:124] > #   The currently recognized values are:
	I0810 22:38:09.854227  411387 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0810 22:38:09.854239  411387 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0810 22:38:09.854249  411387 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0810 22:38:09.854257  411387 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0810 22:38:09.854263  411387 command_runner.go:124] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0810 22:38:09.854267  411387 command_runner.go:124] > runtime_type = "oci"
	I0810 22:38:09.854271  411387 command_runner.go:124] > runtime_root = "/run/runc"
	I0810 22:38:09.854279  411387 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0810 22:38:09.854283  411387 command_runner.go:124] > # running containers
	I0810 22:38:09.854288  411387 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0810 22:38:09.854294  411387 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0810 22:38:09.854306  411387 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0810 22:38:09.854316  411387 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0810 22:38:09.854324  411387 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0810 22:38:09.854333  411387 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0810 22:38:09.854342  411387 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0810 22:38:09.854350  411387 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0810 22:38:09.854356  411387 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0810 22:38:09.854360  411387 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0810 22:38:09.854367  411387 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0810 22:38:09.854371  411387 command_runner.go:124] > #
	I0810 22:38:09.854377  411387 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0810 22:38:09.854384  411387 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0810 22:38:09.854391  411387 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0810 22:38:09.854398  411387 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0810 22:38:09.854405  411387 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0810 22:38:09.854408  411387 command_runner.go:124] > [crio.image]
	I0810 22:38:09.854416  411387 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0810 22:38:09.854423  411387 command_runner.go:124] > default_transport = "docker://"
	I0810 22:38:09.854434  411387 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0810 22:38:09.854444  411387 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:38:09.854450  411387 command_runner.go:124] > global_auth_file = ""
	I0810 22:38:09.854460  411387 command_runner.go:124] > # The image used to instantiate infra containers.
	I0810 22:38:09.854469  411387 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:38:09.854478  411387 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0810 22:38:09.854489  411387 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0810 22:38:09.854502  411387 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:38:09.854509  411387 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:38:09.854514  411387 command_runner.go:124] > pause_image_auth_file = ""
	I0810 22:38:09.854521  411387 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0810 22:38:09.854527  411387 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0810 22:38:09.854539  411387 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0810 22:38:09.854549  411387 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0810 22:38:09.854555  411387 command_runner.go:124] > pause_command = "/pause"
	I0810 22:38:09.854565  411387 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0810 22:38:09.854583  411387 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0810 22:38:09.854597  411387 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0810 22:38:09.854609  411387 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0810 22:38:09.854623  411387 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0810 22:38:09.854634  411387 command_runner.go:124] > signature_policy = ""
	I0810 22:38:09.854644  411387 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0810 22:38:09.854656  411387 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0810 22:38:09.854663  411387 command_runner.go:124] > # changing them here.
	I0810 22:38:09.854671  411387 command_runner.go:124] > #insecure_registries = "[]"
	I0810 22:38:09.854682  411387 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0810 22:38:09.854692  411387 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0810 22:38:09.854697  411387 command_runner.go:124] > image_volumes = "mkdir"
	I0810 22:38:09.854705  411387 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0810 22:38:09.854711  411387 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0810 22:38:09.854719  411387 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0810 22:38:09.854724  411387 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0810 22:38:09.854730  411387 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0810 22:38:09.854733  411387 command_runner.go:124] > #registries = [
	I0810 22:38:09.854736  411387 command_runner.go:124] > # ]
	I0810 22:38:09.854742  411387 command_runner.go:124] > # Temporary directory to use for storing big files
	I0810 22:38:09.854748  411387 command_runner.go:124] > big_files_temporary_dir = ""
	I0810 22:38:09.854754  411387 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0810 22:38:09.854761  411387 command_runner.go:124] > # CNI plugins.
	I0810 22:38:09.854765  411387 command_runner.go:124] > [crio.network]
	I0810 22:38:09.854771  411387 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0810 22:38:09.854776  411387 command_runner.go:124] > # CRI-O will pick up the first one found in network_dir.
	I0810 22:38:09.854782  411387 command_runner.go:124] > # cni_default_network = "kindnet"
	I0810 22:38:09.854787  411387 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0810 22:38:09.854793  411387 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0810 22:38:09.854803  411387 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0810 22:38:09.854809  411387 command_runner.go:124] > plugin_dirs = [
	I0810 22:38:09.854813  411387 command_runner.go:124] > 	"/opt/cni/bin/",
	I0810 22:38:09.854816  411387 command_runner.go:124] > ]
	I0810 22:38:09.854822  411387 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0810 22:38:09.854827  411387 command_runner.go:124] > [crio.metrics]
	I0810 22:38:09.854832  411387 command_runner.go:124] > # Globally enable or disable metrics support.
	I0810 22:38:09.854836  411387 command_runner.go:124] > enable_metrics = false
	I0810 22:38:09.854843  411387 command_runner.go:124] > # The port on which the metrics server will listen.
	I0810 22:38:09.854847  411387 command_runner.go:124] > metrics_port = 9090
	I0810 22:38:09.854868  411387 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0810 22:38:09.854875  411387 command_runner.go:124] > metrics_socket = ""
	I0810 22:38:09.854914  411387 command_runner.go:124] ! time="2021-08-10T22:38:09Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0810 22:38:09.854929  411387 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
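The CRI-O configuration echoed above is plain TOML, so individual settings can be pulled out with standard text tools. A minimal sketch (the keys and values below are copied from the dump above; the temp file stands in for `/etc/crio/crio.conf`, which you would read on a live node):

```shell
#!/bin/sh
# Write a small excerpt of the crio.conf TOML shown in the log to a temp file,
# then extract a single key with sed.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
pids_limit = 1024
log_size_max = -1
default_runtime = "runc"
EOF
# Pull the value of pids_limit (strip the key, '=', and surrounding spaces).
pids_limit=$(sed -n 's/^pids_limit *= *//p' "$tmp")
echo "pids_limit=$pids_limit"
```

The same `sed -n 's/^key *= *//p'` pattern works for any scalar key in the dump, e.g. `default_runtime` or `log_level`.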
	I0810 22:38:09.854994  411387 cni.go:93] Creating CNI manager for ""
	I0810 22:38:09.855004  411387 cni.go:154] 2 nodes found, recommending kindnet
	I0810 22:38:09.855014  411387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:38:09.855026  411387 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210810223625-345780 NodeName:multinode-20210810223625-345780-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:38:09.855144  411387 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210810223625-345780-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
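The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by `---`). A quick sanity check that such a stream splits into the expected number of documents (the heredoc is a trimmed stand-in for the config printed in the log):

```shell
#!/bin/sh
# Count the YAML documents in a kubeadm-style multi-doc stream.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Documents = separator count + 1, for a stream with no leading/trailing '---'.
seps=$(grep -c '^---$' "$tmp")
docs=$((seps + 1))
echo "documents=$docs"   # → documents=4
```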
	I0810 22:38:09.855214  411387 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-20210810223625-345780-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223625-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:38:09.855263  411387 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:38:09.862405  411387 command_runner.go:124] > kubeadm
	I0810 22:38:09.862427  411387 command_runner.go:124] > kubectl
	I0810 22:38:09.862433  411387 command_runner.go:124] > kubelet
	I0810 22:38:09.862458  411387 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:38:09.862510  411387 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0810 22:38:09.869747  411387 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (566 bytes)
	I0810 22:38:09.882311  411387 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:38:09.894681  411387 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0810 22:38:09.897704  411387 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
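The bash one-liner logged above updates /etc/hosts idempotently: strip any existing control-plane.minikube.internal line, append the current mapping, and copy the result back. The same pattern can be sketched against a scratch file (paths and IPs here are illustrative):

```shell
#!/bin/sh
# Reproduce minikube's idempotent hosts-file update against a scratch file:
# drop any stale mapping for the hostname, then append the fresh one.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.9\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v 'control-plane\.minikube\.internal$' "$hosts"
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
# Exactly one entry remains, now pointing at 192.168.49.2.
grep -c 'control-plane\.minikube\.internal' "$hosts"   # → 1
```

Because the stale line is filtered before the new one is appended, running the update repeatedly never accumulates duplicate entries.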
	I0810 22:38:09.906644  411387 host.go:66] Checking if "multinode-20210810223625-345780" exists ...
	I0810 22:38:09.906959  411387 start.go:241] JoinCluster: &{Name:multinode-20210810223625-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223625-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:38:09.907048  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0810 22:38:09.907090  411387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:38:09.946416  411387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:38:10.093084  411387 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token l5u5nw.m14yqate2h0mu3yk --discovery-token-ca-cert-hash sha256:95b70b0e3b8140822120816c1284056e6e385d941feb1ffb25a07e039168adfc 
	I0810 22:38:10.095760  411387 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0810 22:38:10.095803  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token l5u5nw.m14yqate2h0mu3yk --discovery-token-ca-cert-hash sha256:95b70b0e3b8140822120816c1284056e6e385d941feb1ffb25a07e039168adfc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210810223625-345780-m02"
	I0810 22:38:10.207034  411387 command_runner.go:124] ! 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	I0810 22:38:10.210570  411387 command_runner.go:124] ! 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0810 22:38:10.210609  411387 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
	I0810 22:38:10.281240  411387 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0810 22:38:16.406917  411387 command_runner.go:124] > [preflight] Running pre-flight checks
	I0810 22:38:16.406951  411387 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0810 22:38:16.406960  411387 command_runner.go:124] > KERNEL_VERSION: 4.9.0-16-amd64
	I0810 22:38:16.406967  411387 command_runner.go:124] > OS: Linux
	I0810 22:38:16.406975  411387 command_runner.go:124] > CGROUPS_CPU: enabled
	I0810 22:38:16.406984  411387 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0810 22:38:16.406992  411387 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0810 22:38:16.407003  411387 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0810 22:38:16.407015  411387 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0810 22:38:16.407027  411387 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0810 22:38:16.407039  411387 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0810 22:38:16.407051  411387 command_runner.go:124] > CGROUPS_HUGETLB: missing
	I0810 22:38:16.407062  411387 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0810 22:38:16.407079  411387 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0810 22:38:16.407093  411387 command_runner.go:124] > [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
	I0810 22:38:16.407107  411387 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0810 22:38:16.407122  411387 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0810 22:38:16.407136  411387 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0810 22:38:16.407151  411387 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0810 22:38:16.407161  411387 command_runner.go:124] > This node has joined the cluster:
	I0810 22:38:16.407174  411387 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0810 22:38:16.407187  411387 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0810 22:38:16.407200  411387 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0810 22:38:16.407224  411387 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token l5u5nw.m14yqate2h0mu3yk --discovery-token-ca-cert-hash sha256:95b70b0e3b8140822120816c1284056e6e385d941feb1ffb25a07e039168adfc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210810223625-345780-m02": (6.311407566s)
	I0810 22:38:16.407251  411387 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0810 22:38:16.536239  411387 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0810 22:38:16.536273  411387 start.go:243] JoinCluster complete in 6.629317039s
	I0810 22:38:16.536287  411387 cni.go:93] Creating CNI manager for ""
	I0810 22:38:16.536293  411387 cni.go:154] 2 nodes found, recommending kindnet
	I0810 22:38:16.536334  411387 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:38:16.539667  411387 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0810 22:38:16.539696  411387 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0810 22:38:16.539706  411387 command_runner.go:124] > Device: 801h/2049d	Inode: 3807833     Links: 1
	I0810 22:38:16.539714  411387 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:38:16.539725  411387 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0810 22:38:16.539730  411387 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0810 22:38:16.539735  411387 command_runner.go:124] > Change: 2021-07-02 14:50:00.997696388 +0000
	I0810 22:38:16.539739  411387 command_runner.go:124] >  Birth: -
	I0810 22:38:16.539806  411387 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 22:38:16.539821  411387 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:38:16.552155  411387 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:38:16.721181  411387 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0810 22:38:16.723191  411387 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0810 22:38:16.725016  411387 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0810 22:38:16.734164  411387 command_runner.go:124] > daemonset.apps/kindnet configured
	I0810 22:38:16.737563  411387 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0810 22:38:16.739724  411387 out.go:177] * Verifying Kubernetes components...
	I0810 22:38:16.739796  411387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:38:16.749766  411387 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:38:16.750083  411387 kapi.go:59] client config for multinode-20210810223625-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223625-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:38:16.751325  411387 node_ready.go:35] waiting up to 6m0s for node "multinode-20210810223625-345780-m02" to be "Ready" ...
	I0810 22:38:16.751409  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:16.751420  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:16.751425  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:16.751429  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:16.753547  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:16.753565  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:16.753572  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:16.753576  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:16.753581  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:16.753586  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:16.753591  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:16 GMT
	I0810 22:38:16.753688  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"567","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5311 chars]
	I0810 22:38:17.254771  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:17.254800  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:17.254806  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:17.254810  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:17.257123  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:17.257145  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:17.257151  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:17.257155  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:17.257159  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:17.257162  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:17.257165  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:17 GMT
	I0810 22:38:17.257286  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"567","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5311 chars]
	I0810 22:38:17.754972  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:17.755006  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:17.755013  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:17.755018  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:17.757495  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:17.757815  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:17.757843  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:17.757864  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:17 GMT
	I0810 22:38:17.757873  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:17.757879  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:17.757885  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:17.758305  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"567","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5311 chars]
	I0810 22:38:18.254170  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:18.254195  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:18.254202  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:18.254206  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:18.256749  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:18.256773  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:18.256779  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:18.256782  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:18.256786  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:18.256789  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:18.256793  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:18 GMT
	I0810 22:38:18.256902  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"567","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5311 chars]
	I0810 22:38:18.754414  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:18.754444  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:18.754451  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:18.754455  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:18.756656  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:18.756682  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:18.756689  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:18.756693  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:18.756697  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:18.756700  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:18 GMT
	I0810 22:38:18.756703  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:18.756827  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"567","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5311 chars]
	I0810 22:38:18.757158  411387 node_ready.go:58] node "multinode-20210810223625-345780-m02" has status "Ready":"False"
	I0810 22:38:19.254551  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:19.254575  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:19.254581  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:19.254586  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:19.257076  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:19.257106  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:19.257115  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:19.257120  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:19.257126  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:19.257130  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:19.257136  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:19 GMT
	I0810 22:38:19.257254  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"567","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5311 chars]
	I0810 22:38:19.754824  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:19.754855  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:19.754861  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:19.754865  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:19.757376  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:19.757405  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:19.757411  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:19.757415  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:19 GMT
	I0810 22:38:19.757418  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:19.757421  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:19.757427  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:19.757508  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"567","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5311 chars]
	I0810 22:38:20.254150  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:20.254181  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:20.254187  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:20.254191  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:20.256458  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:20.256479  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:20.256485  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:20.256493  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:20.256497  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:20.256501  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:20 GMT
	I0810 22:38:20.256507  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:20.256746  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:20.754464  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:20.754500  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:20.754509  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:20.754515  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:20.756913  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:20.756965  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:20.756971  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:20 GMT
	I0810 22:38:20.756974  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:20.756979  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:20.756984  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:20.756988  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:20.757158  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:20.757447  411387 node_ready.go:58] node "multinode-20210810223625-345780-m02" has status "Ready":"False"
	I0810 22:38:21.254667  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:21.254692  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:21.254698  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:21.254702  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:21.257105  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:21.257125  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:21.257133  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:21 GMT
	I0810 22:38:21.257137  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:21.257141  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:21.257145  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:21.257149  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:21.257291  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:21.754979  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:21.755015  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:21.755022  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:21.755027  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:21.757551  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:21.757578  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:21.757586  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:21.757591  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:21 GMT
	I0810 22:38:21.757595  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:21.757600  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:21.757608  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:21.757729  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:22.254259  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:22.254287  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:22.254305  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:22.254310  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:22.256433  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:22.256456  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:22.256464  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:22.256469  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:22.256474  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:22 GMT
	I0810 22:38:22.256479  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:22.256484  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:22.256590  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:22.755169  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:22.755196  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:22.755203  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:22.755208  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:22.757528  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:22.757554  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:22.757562  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:22.757567  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:22 GMT
	I0810 22:38:22.757572  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:22.757576  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:22.757579  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:22.757675  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:22.757915  411387 node_ready.go:58] node "multinode-20210810223625-345780-m02" has status "Ready":"False"
	I0810 22:38:23.254135  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:23.254160  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:23.254166  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:23.254170  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:23.257112  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:23.257139  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:23.257147  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:23.257151  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:23.257154  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:23 GMT
	I0810 22:38:23.257157  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:23.257161  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:23.257320  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:23.754851  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:23.754878  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:23.754895  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:23.754899  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:23.757256  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:23.757278  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:23.757284  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:23.757288  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:23.757292  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:23 GMT
	I0810 22:38:23.757296  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:23.757299  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:23.757466  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:24.255153  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:24.255181  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:24.255187  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:24.255191  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:24.257899  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:24.257926  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:24.257935  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:24.257939  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:24.257943  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:24 GMT
	I0810 22:38:24.257946  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:24.257949  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:24.258040  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:24.754392  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:24.754428  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:24.754436  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:24.754440  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:24.756848  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:24.756875  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:24.756886  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:24.756891  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:24.756896  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:24.756901  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:24.756906  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:24 GMT
	I0810 22:38:24.757057  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:25.254523  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:25.254550  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:25.254556  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:25.254561  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:25.257200  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:25.257226  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:25.257232  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:25.257237  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:25.257241  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:25 GMT
	I0810 22:38:25.257244  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:25.257247  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:25.257424  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:25.257689  411387 node_ready.go:58] node "multinode-20210810223625-345780-m02" has status "Ready":"False"
	I0810 22:38:25.754293  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:25.754323  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:25.754330  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:25.754336  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:25.756618  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:25.756644  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:25.756653  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:25.756658  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:25.756662  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:25.756667  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:25.756671  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:25 GMT
	I0810 22:38:25.756814  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"586","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5420 chars]
	I0810 22:38:26.255194  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:26.255224  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.255231  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.255235  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.257511  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:26.257532  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.257537  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.257541  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.257544  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.257550  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.257553  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.257632  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"594","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 5685 chars]
	I0810 22:38:26.257866  411387 node_ready.go:49] node "multinode-20210810223625-345780-m02" has status "Ready":"True"
	I0810 22:38:26.257886  411387 node_ready.go:38] duration metric: took 9.506539457s waiting for node "multinode-20210810223625-345780-m02" to be "Ready" ...
	I0810 22:38:26.257900  411387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:38:26.257959  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0810 22:38:26.257970  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.257974  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.257978  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.260823  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:26.260848  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.260854  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.260859  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.260864  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.260871  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.260883  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.261353  411387 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"594"},"items":[{"metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"528","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 68348 chars]
	I0810 22:38:26.262908  411387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.262975  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-brf4l
	I0810 22:38:26.262985  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.262989  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.262994  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.264666  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:26.264684  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.264700  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.264704  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.264708  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.264713  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.264718  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.264853  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-brf4l","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"827b85ba-dadb-4a2a-baa7-557371796646","resourceVersion":"528","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"b94f584c-f04a-4445-aad8-d9e04572e979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b94f584c-f04a-4445-aad8-d9e04572e979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5736 chars]
	I0810 22:38:26.265295  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:26.265312  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.265317  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.265321  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.268537  411387 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:38:26.268556  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.268562  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.268565  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.268570  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.268574  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.268579  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.268708  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:26.268957  411387 pod_ready.go:92] pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:26.268969  411387 pod_ready.go:81] duration metric: took 6.040099ms waiting for pod "coredns-558bd4d5db-brf4l" in "kube-system" namespace to be "Ready" ...
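The `"Ready":"True"` verdicts in these lines come from inspecting the `status.conditions` array of the JSON bodies logged above. A self-contained sketch of that check, decoding only the fields needed (the struct names here are illustrative, not the actual minikube types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus is a minimal slice of the object returned by
// GET /api/v1/nodes/<name> (or a Pod), keeping only the
// conditions list the readiness check consults.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the Ready condition in the response
// body has status "True". Unparseable bodies count as not ready.
func isReady(body []byte) bool {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	fmt.Println(isReady(sample))
}
```

The same condition scan explains why the loop above kept reporting `"Ready":"False"` until `resourceVersion` 594, when the kubelet updated the node's conditions.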
	I0810 22:38:26.268978  411387 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.269022  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223625-345780
	I0810 22:38:26.269031  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.269037  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.269041  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.270540  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:26.270557  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.270564  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.270569  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.270574  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.270579  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.270583  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.270701  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223625-345780","namespace":"kube-system","uid":"cf0c44d7-8ffd-488d-9df9-5e1525664f05","resourceVersion":"285","creationTimestamp":"2021-08-10T22:36:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"8d837e7f6a5c73c311006df3eb1878eb","kubernetes.io/config.mirror":"8d837e7f6a5c73c311006df3eb1878eb","kubernetes.io/config.seen":"2021-08-10T22:36:56.517157696Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.has [truncated 5564 chars]
	I0810 22:38:26.271027  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:26.271042  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.271049  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.271054  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.272472  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:26.272486  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.272493  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.272497  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.272501  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.272505  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.272510  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.272635  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:26.272905  411387 pod_ready.go:92] pod "etcd-multinode-20210810223625-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:26.272940  411387 pod_ready.go:81] duration metric: took 3.933229ms waiting for pod "etcd-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.272957  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.272998  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210810223625-345780
	I0810 22:38:26.273007  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.273013  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.273020  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.274422  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:26.274437  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.274443  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.274447  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.274451  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.274456  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.274461  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.274550  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210810223625-345780","namespace":"kube-system","uid":"db837571-f437-4487-bf1b-2fcd95f2792f","resourceVersion":"293","creationTimestamp":"2021-08-10T22:36:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"0ec6379c2d3124d4e5d783fbbe51e0a9","kubernetes.io/config.mirror":"0ec6379c2d3124d4e5d783fbbe51e0a9","kubernetes.io/config.seen":"2021-08-10T22:36:42.168375581Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addres [truncated 8091 chars]
	I0810 22:38:26.274843  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:26.274857  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.274864  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.274870  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.276208  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:26.276225  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.276242  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.276247  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.276252  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.276257  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.276262  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.276373  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:26.276597  411387 pod_ready.go:92] pod "kube-apiserver-multinode-20210810223625-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:26.276607  411387 pod_ready.go:81] duration metric: took 3.64414ms waiting for pod "kube-apiserver-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.276615  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.276659  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210810223625-345780
	I0810 22:38:26.276667  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.276671  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.276676  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.278033  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:26.278050  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.278056  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.278065  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.278070  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.278076  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.278081  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.278167  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210810223625-345780","namespace":"kube-system","uid":"ca03e430-c2fe-4342-bf58-8881dc7681e6","resourceVersion":"289","creationTimestamp":"2021-08-10T22:36:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5928e8fa0e03d8e9cfa8d0d54904a9b2","kubernetes.io/config.mirror":"5928e8fa0e03d8e9cfa8d0d54904a9b2","kubernetes.io/config.seen":"2021-08-10T22:36:56.517180929Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7657 chars]
	I0810 22:38:26.278452  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:26.278464  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.278469  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.278473  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.279794  411387 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:38:26.279809  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.279813  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.279817  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.279820  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.279832  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.279836  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.279946  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:26.280140  411387 pod_ready.go:92] pod "kube-controller-manager-multinode-20210810223625-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:26.280149  411387 pod_ready.go:81] duration metric: took 3.528195ms waiting for pod "kube-controller-manager-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.280158  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fmk5q" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.455674  411387 request.go:600] Waited for 175.394726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmk5q
	I0810 22:38:26.455751  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fmk5q
	I0810 22:38:26.455759  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.455769  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.455780  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.458187  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:26.458206  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.458211  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.458215  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.458219  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.458222  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.458226  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.458395  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fmk5q","generateName":"kube-proxy-","namespace":"kube-system","uid":"8c847427-5c0b-458b-add9-66d03949fed9","resourceVersion":"583","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c3a997cc-f437-4a94-8731-52c9d831f23a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3a997cc-f437-4a94-8731-52c9d831f23a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5764 chars]
	I0810 22:38:26.656212  411387 request.go:600] Waited for 197.393607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:26.656293  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780-m02
	I0810 22:38:26.656298  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.656304  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.656308  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.658562  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:26.658585  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.658592  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.658595  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.658599  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.658602  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.658605  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.658722  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780-m02","uid":"e1d385a7-cb9d-49ae-83dd-2b67156b8f55","resourceVersion":"594","creationTimestamp":"2021-08-10T22:38:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 5685 chars]
	I0810 22:38:26.658977  411387 pod_ready.go:92] pod "kube-proxy-fmk5q" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:26.658988  411387 pod_ready.go:81] duration metric: took 378.823984ms waiting for pod "kube-proxy-fmk5q" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.658997  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjpnd" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:26.856058  411387 request.go:600] Waited for 196.973379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjpnd
	I0810 22:38:26.856211  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjpnd
	I0810 22:38:26.856226  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:26.856234  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:26.856244  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:26.858512  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:26.858538  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:26.858546  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:26.858552  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:26.858557  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:26.858562  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:26.858567  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:26 GMT
	I0810 22:38:26.858676  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjpnd","generateName":"kube-proxy-","namespace":"kube-system","uid":"faf63065-64c1-40bf-a45f-9f974c5a950a","resourceVersion":"481","creationTimestamp":"2021-08-10T22:37:10Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c3a997cc-f437-4a94-8731-52c9d831f23a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3a997cc-f437-4a94-8731-52c9d831f23a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5756 chars]
	I0810 22:38:27.055313  411387 request.go:600] Waited for 196.277036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:27.055390  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:27.055396  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:27.055401  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:27.055407  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:27.057754  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:27.057775  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:27.057780  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:27.057784  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:27.057787  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:27.057791  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:27.057794  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:27 GMT
	I0810 22:38:27.057977  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:27.058243  411387 pod_ready.go:92] pod "kube-proxy-mjpnd" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:27.058258  411387 pod_ready.go:81] duration metric: took 399.254878ms waiting for pod "kube-proxy-mjpnd" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:27.058271  411387 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:27.255721  411387 request.go:600] Waited for 197.3636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223625-345780
	I0810 22:38:27.255814  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223625-345780
	I0810 22:38:27.255824  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:27.255832  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:27.255839  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:27.258207  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:27.258244  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:27.258252  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:27.258258  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:27.258263  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:27 GMT
	I0810 22:38:27.258268  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:27.258273  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:27.258391  411387 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210810223625-345780","namespace":"kube-system","uid":"42c8e1f7-7601-46c4-a4ed-41453ba322ed","resourceVersion":"300","creationTimestamp":"2021-08-10T22:36:50Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fa1e47c19fa2d5c7ea26f213a01edf2a","kubernetes.io/config.mirror":"fa1e47c19fa2d5c7ea26f213a01edf2a","kubernetes.io/config.seen":"2021-08-10T22:36:42.168377963Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:37:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4539 chars]
	I0810 22:38:27.456151  411387 request.go:600] Waited for 197.365493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:27.456239  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210810223625-345780
	I0810 22:38:27.456246  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:27.456251  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:27.456255  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:27.458834  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:27.458857  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:27.458865  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:27.458871  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:27.458878  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:27.458882  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:27.458887  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:27 GMT
	I0810 22:38:27.458979  411387 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10 [truncated 6604 chars]
	I0810 22:38:27.459282  411387 pod_ready.go:92] pod "kube-scheduler-multinode-20210810223625-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:38:27.459296  411387 pod_ready.go:81] duration metric: took 401.013725ms waiting for pod "kube-scheduler-multinode-20210810223625-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:38:27.459307  411387 pod_ready.go:38] duration metric: took 1.201394539s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:38:27.459324  411387 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:38:27.459371  411387 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:38:27.469341  411387 system_svc.go:56] duration metric: took 10.01055ms WaitForService to wait for kubelet.
	I0810 22:38:27.469363  411387 kubeadm.go:547] duration metric: took 10.731762995s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:38:27.469386  411387 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:38:27.655825  411387 request.go:600] Waited for 186.332265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0810 22:38:27.655902  411387 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0810 22:38:27.655917  411387 round_trippers.go:438] Request Headers:
	I0810 22:38:27.655926  411387 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:38:27.655935  411387 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:38:27.658334  411387 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:38:27.658352  411387 round_trippers.go:460] Response Headers:
	I0810 22:38:27.658356  411387 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4ccc33cd-9132-43bf-b08e-2c89e0c5f4ee
	I0810 22:38:27.658360  411387 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1828f1-9490-479e-b42d-50e4534e3780
	I0810 22:38:27.658364  411387 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:38:27 GMT
	I0810 22:38:27.658367  411387 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:38:27.658370  411387 round_trippers.go:463]     Content-Type: application/json
	I0810 22:38:27.658535  411387 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"multinode-20210810223625-345780","uid":"f02ef5c1-2e00-43d6-bc7a-a6738e83bfe5","resourceVersion":"410","creationTimestamp":"2021-08-10T22:36:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223625-345780","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223625-345780","minikube.k8s.io/updated_at":"2021_08_10T22_36_51_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-mana
ged-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","opera [truncated 13334 chars]
	I0810 22:38:27.658921  411387 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0810 22:38:27.658937  411387 node_conditions.go:123] node cpu capacity is 8
	I0810 22:38:27.658950  411387 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0810 22:38:27.658954  411387 node_conditions.go:123] node cpu capacity is 8
	I0810 22:38:27.658959  411387 node_conditions.go:105] duration metric: took 189.56737ms to run NodePressure ...
	I0810 22:38:27.658971  411387 start.go:231] waiting for startup goroutines ...
	I0810 22:38:27.702598  411387 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0810 22:38:27.705262  411387 out.go:177] * Done! kubectl is now configured to use "multinode-20210810223625-345780" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:36:27 UTC, end at Tue 2021-08-10 22:38:34 UTC. --
	Aug 10 22:38:00 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:00.080109725Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61 k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:42585056,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a5af2996-5abc-4485-ae42-2ab07ce2e703 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:38:00 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:00.080914006Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-brf4l/coredns" id=7e275254-6206-4ced-b850-435b41ebf1b6 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:38:00 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:00.093239613Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c527312415775f78926cd486627cd466cb2d9a3cb0e88fadd91c9cc2f4ea35db/merged/etc/passwd: no such file or directory"
	Aug 10 22:38:00 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:00.093283343Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c527312415775f78926cd486627cd466cb2d9a3cb0e88fadd91c9cc2f4ea35db/merged/etc/group: no such file or directory"
	Aug 10 22:38:00 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:00.229002636Z" level=info msg="Created container e53303f23a1ece4926226c5e8c81453235a78b0c4be104ee16bab555e3e5b721: kube-system/coredns-558bd4d5db-brf4l/coredns" id=7e275254-6206-4ced-b850-435b41ebf1b6 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:38:00 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:00.229588572Z" level=info msg="Starting container: e53303f23a1ece4926226c5e8c81453235a78b0c4be104ee16bab555e3e5b721" id=63e720ef-bb72-477b-bdae-2b617a0a152f name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 10 22:38:00 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:00.239614694Z" level=info msg="Started container e53303f23a1ece4926226c5e8c81453235a78b0c4be104ee16bab555e3e5b721: kube-system/coredns-558bd4d5db-brf4l/coredns" id=63e720ef-bb72-477b-bdae-2b617a0a152f name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 10 22:38:28 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:28.984059062Z" level=info msg="Running pod sandbox: default/busybox-84b6686758-h8c2g/POD" id=ba6e424c-9c9c-400e-9544-ead00e8e378b name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 10 22:38:28 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:28.998680818Z" level=info msg="Got pod network &{Name:busybox-84b6686758-h8c2g Namespace:default ID:a12294be8d1d849c4fbd3aa6d101b55454d5c5cd87c79a4cf5819f3cebc26b59 NetNS:/var/run/netns/78b39061-29f1-4ea1-8ee1-c2bab8744c06 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 10 22:38:28 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:28.998716138Z" level=info msg="About to add CNI network kindnet (type=ptp)"
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.053690898Z" level=info msg="Got pod network &{Name:busybox-84b6686758-h8c2g Namespace:default ID:a12294be8d1d849c4fbd3aa6d101b55454d5c5cd87c79a4cf5819f3cebc26b59 NetNS:/var/run/netns/78b39061-29f1-4ea1-8ee1-c2bab8744c06 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.053822207Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.178490537Z" level=info msg="Ran pod sandbox a12294be8d1d849c4fbd3aa6d101b55454d5c5cd87c79a4cf5819f3cebc26b59 with infra container: default/busybox-84b6686758-h8c2g/POD" id=ba6e424c-9c9c-400e-9544-ead00e8e378b name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.179606513Z" level=info msg="Checking image status: busybox:1.28" id=decf4e6d-2231-4763-9681-69008ac6267e name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.180245308Z" level=info msg="Image busybox:1.28 not found" id=decf4e6d-2231-4763-9681-69008ac6267e name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.181036099Z" level=info msg="Pulling image: busybox:1.28" id=bfff20cf-235b-4a38-8100-80317cec9ef5 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.187655308Z" level=info msg="Trying to access \"docker.io/library/busybox:1.28\""
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.350509791Z" level=info msg="Trying to access \"docker.io/library/busybox:1.28\""
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.989682028Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47" id=bfff20cf-235b-4a38-8100-80317cec9ef5 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.990425634Z" level=info msg="Checking image status: busybox:1.28" id=40e614f9-f029-4f39-b85c-a4fc3213caf7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.990926277Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[docker.io/library/busybox:1.28],RepoDigests:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335],Size_:1365634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=40e614f9-f029-4f39-b85c-a4fc3213caf7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:38:29 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:29.991698042Z" level=info msg="Creating container: default/busybox-84b6686758-h8c2g/busybox" id=350fc072-6b05-4cb0-9939-af04f5278407 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:38:30 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:30.139131563Z" level=info msg="Created container d9f7ccd2e3c146a048cd63b5489647c64c388872e7897975fa98bd88eb3ab30a: default/busybox-84b6686758-h8c2g/busybox" id=350fc072-6b05-4cb0-9939-af04f5278407 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:38:30 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:30.139613903Z" level=info msg="Starting container: d9f7ccd2e3c146a048cd63b5489647c64c388872e7897975fa98bd88eb3ab30a" id=bf5d71c4-a103-4a55-a07d-da1ae409982f name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 10 22:38:30 multinode-20210810223625-345780 crio[372]: time="2021-08-10 22:38:30.148897555Z" level=info msg="Started container d9f7ccd2e3c146a048cd63b5489647c64c388872e7897975fa98bd88eb3ab30a: default/busybox-84b6686758-h8c2g/busybox" id=bf5d71c4-a103-4a55-a07d-da1ae409982f name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID
	d9f7ccd2e3c14       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   4 seconds ago        Running             busybox                   0                   a12294be8d1d8
	e53303f23a1ec       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    34 seconds ago       Running             coredns                   0                   eabc968cf0ac2
	9b02a058c698a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    About a minute ago   Running             storage-provisioner       0                   9093a3386f57c
	0ff9780929d0f       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    About a minute ago   Running             kindnet-cni               0                   9e0a65abc1bff
	88c7b5fc4adc9       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    About a minute ago   Running             kube-proxy                0                   bd9c74ee33153
	0fdbf17c340b0       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    About a minute ago   Running             kube-scheduler            0                   d587f6f09892a
	c94761833e2f7       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    About a minute ago   Running             kube-controller-manager   0                   cdf621720359c
	621651b937913       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    About a minute ago   Running             kube-apiserver            0                   4fd1daf1bf531
	44a615e1f7aa7       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    About a minute ago   Running             etcd                      0                   403e021e4f11b
	
	* 
	* ==> coredns [e53303f23a1ece4926226c5e8c81453235a78b0c4be104ee16bab555e3e5b721] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210810223625-345780
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210810223625-345780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=multinode-20210810223625-345780
	                    minikube.k8s.io/updated_at=2021_08_10T22_36_51_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:36:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210810223625-345780
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:38:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:37:06 +0000   Tue, 10 Aug 2021 22:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:37:06 +0000   Tue, 10 Aug 2021 22:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:37:06 +0000   Tue, 10 Aug 2021 22:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:37:06 +0000   Tue, 10 Aug 2021 22:37:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20210810223625-345780
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                741fbd22-fb4a-4488-aa55-292499496867
	  Boot ID:                    73822e98-d94c-4da2-a874-acfa9b587b30
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-h8c2g                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-558bd4d5db-brf4l                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     85s
	  kube-system                 etcd-multinode-20210810223625-345780                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-v8dtb                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      85s
	  kube-system                 kube-apiserver-multinode-20210810223625-345780             250m (3%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-multinode-20210810223625-345780    200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-mjpnd                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-multinode-20210810223625-345780             100m (1%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  113s (x5 over 113s)  kubelet     Node multinode-20210810223625-345780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x4 over 113s)  kubelet     Node multinode-20210810223625-345780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 113s)  kubelet     Node multinode-20210810223625-345780 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node multinode-20210810223625-345780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node multinode-20210810223625-345780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node multinode-20210810223625-345780 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             99s                  kubelet     Node multinode-20210810223625-345780 status is now: NodeNotReady
	  Normal  NodeReady                89s                  kubelet     Node multinode-20210810223625-345780 status is now: NodeReady
	  Normal  Starting                 84s                  kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210810223625-345780-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210810223625-345780-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:38:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210810223625-345780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:38:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:38:26 +0000   Tue, 10 Aug 2021 22:38:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:38:26 +0000   Tue, 10 Aug 2021 22:38:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:38:26 +0000   Tue, 10 Aug 2021 22:38:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:38:26 +0000   Tue, 10 Aug 2021 22:38:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20210810223625-345780-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                8c273c3f-048c-4d72-a6b7-f7177fcf7df8
	  Boot ID:                    73822e98-d94c-4da2-a874-acfa9b587b30
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-crhdk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-2sblt               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19s
	  kube-system                 kube-proxy-fmk5q            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 19s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x2 over 19s)  kubelet     Node multinode-20210810223625-345780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x2 over 19s)  kubelet     Node multinode-20210810223625-345780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x2 over 19s)  kubelet     Node multinode-20210810223625-345780-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 17s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                9s                 kubelet     Node multinode-20210810223625-345780-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug10 22:34] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 82 fe bc 50 41 6a 08 06        .........PAj..
	[  +0.000003] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 82 fe bc 50 41 6a 08 06        .........PAj..
	[ +21.297117] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 66 e9 df 71 c3 98 08 06        ......f..q....
	[ +11.991438] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth6fcd92bc
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7e 86 39 b7 dd ad 08 06        ......~.9.....
	[Aug10 22:35] cgroup: cgroup2: unknown option "nsdelegate"
	[ +25.557577] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:36] cgroup: cgroup2: unknown option "nsdelegate"
	[ +26.599620] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:37] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 56 cd 3c d0 ef e9 08 06        ......V.<.....
	[  +0.000003] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 56 cd 3c d0 ef e9 08 06        ......V.<.....
	[ +23.261797] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 5a 82 53 81 9c 79 08 06        ......Z.S..y..
	[ +14.984182] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd6d6f610
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 26 e4 a7 4f 8e 89 08 06        ......&..O....
	[Aug10 22:38] cgroup: cgroup2: unknown option "nsdelegate"
	[ +24.484127] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth2b4642ef
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff da 16 93 3d ce 2c 08 06        .........=.,..
	[  +0.063724] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth520d51a3
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d6 b4 9b cf 8a 2f 08 06        .........../..
	
	* 
	* ==> etcd [44a615e1f7aa7e5668ea60b0be6460e90990e8bb12cb1a485307a22d890dd13d] <==
	* 2021-08-10 22:36:44.460358 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-10 22:36:44.460630 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-10 22:36:57.448157 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:59.039462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:03.427301 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.200709628s) to execute
	2021-08-10 22:37:03.427335 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (2.18855724s) to execute
	2021-08-10 22:37:03.427402 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (2.153874898s) to execute
	2021-08-10 22:37:03.427440 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-multinode-20210810223625-345780\" " with result "range_response_count:1 size:7045" took too long (2.32101081s) to execute
	2021-08-10 22:37:03.427458 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.892773859s) to execute
	2021-08-10 22:37:05.534257 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000145986s) to execute
	2021-08-10 22:37:05.958256 W | wal: sync duration of 2.516722358s, expected less than 1s
	2021-08-10 22:37:06.248326 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5972" took too long (2.803477004s) to execute
	2021-08-10 22:37:06.248417 W | etcdserver: request "header:<ID:8128006883081272320 > lease_revoke:<id:70cc7b3235fd3b4c>" with result "size:29" took too long (289.928712ms) to execute
	2021-08-10 22:37:06.250131 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (707.263019ms) to execute
	2021-08-10 22:37:06.250240 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (2.515599439s) to execute
	2021-08-10 22:37:06.250342 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-multinode-20210810223625-345780\" " with result "range_response_count:1 size:7433" took too long (2.805241323s) to execute
	2021-08-10 22:37:09.039336 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:19.040042 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:29.039944 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:39.040327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:49.040066 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:59.040488 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:38:09.039664 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:38:19.039802 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:38:29.039996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:38:35 up  2:21,  0 users,  load average: 0.64, 1.41, 2.21
	Linux multinode-20210810223625-345780 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [621651b937913af89a717bfd2db72743b1137803e5585c3a0f30a7b2e78876f0] <==
	* Trace[464861782]: [2.322363856s] [2.322363856s] END
	I0810 22:37:03.428998       1 trace.go:205] Trace[2129521607]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/node-controller,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/kube-controller-manager,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Aug-2021 22:37:01.272) (total time: 2156ms):
	Trace[2129521607]: ---"About to write a response" 2155ms (22:37:00.428)
	Trace[2129521607]: [2.1560849s] [2.1560849s] END
	I0810 22:37:06.249039       1 trace.go:205] Trace[759863350]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (10-Aug-2021 22:37:03.444) (total time: 2804ms):
	Trace[759863350]: [2.80449749s] [2.80449749s] END
	I0810 22:37:06.249208       1 trace.go:205] Trace[1900489013]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (10-Aug-2021 22:37:05.543) (total time: 705ms):
	Trace[1900489013]: ---"Object stored in database" 705ms (22:37:00.249)
	Trace[1900489013]: [705.625937ms] [705.625937ms] END
	I0810 22:37:06.249388       1 trace.go:205] Trace[544569609]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (10-Aug-2021 22:37:03.444) (total time: 2804ms):
	Trace[544569609]: ---"Listing from storage done" 2804ms (22:37:00.249)
	Trace[544569609]: [2.804855956s] [2.804855956s] END
	I0810 22:37:06.250598       1 trace.go:205] Trace[414909335]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (10-Aug-2021 22:37:03.734) (total time: 2516ms):
	Trace[414909335]: [2.516547789s] [2.516547789s] END
	I0810 22:37:06.251436       1 trace.go:205] Trace[1509180263]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210810223625-345780,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (10-Aug-2021 22:37:03.444) (total time: 2806ms):
	Trace[1509180263]: ---"About to write a response" 2806ms (22:37:00.251)
	Trace[1509180263]: [2.806852127s] [2.806852127s] END
	I0810 22:37:10.412673       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0810 22:37:10.460731       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0810 22:37:27.069221       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:37:27.069269       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:37:27.069278       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:38:05.421147       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:38:05.421214       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:38:05.421227       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [c94761833e2f77d4a2777a59a71f66782ddb1e4cba262b987eca9a619e119548] <==
	* I0810 22:37:10.430976       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0810 22:37:10.457913       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0810 22:37:10.457935       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0810 22:37:10.468356       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mjpnd"
	I0810 22:37:10.471221       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-v8dtb"
	I0810 22:37:10.482380       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0810 22:37:10.484198       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"051134bb-0519-41ee-b35a-6929b5f590a9", ResourceVersion:"268", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231811, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000b3b398), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000b3b3b0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00136b1e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000b3b3c8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000b3b3e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000b3b3f8), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00136b200)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00136b240)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0018851a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c982b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00032d340), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001c96370)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c98300)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0810 22:37:10.662614       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-wmrkg"
	I0810 22:37:10.669979       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-brf4l"
	I0810 22:37:10.683162       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-wmrkg"
	W0810 22:38:16.124689       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210810223625-345780-m02" does not exist
	I0810 22:38:16.137592       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fmk5q"
	I0810 22:38:16.137633       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2sblt"
	I0810 22:38:16.144024       1 range_allocator.go:373] Set node multinode-20210810223625-345780-m02 PodCIDR to [10.244.1.0/24]
	E0810 22:38:16.151509       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c3a997cc-f437-4a94-8731-52c9d831f23a", ResourceVersion:"482", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231811, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00084b968), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00084b980)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00084b998), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00084b9b0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001375ac0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc002453100), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00084b9c8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00084b9e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001375b00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002663140), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b439f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00077ed90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002385ff0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000b43a98)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	E0810 22:38:16.152560       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"051134bb-0519-41ee-b35a-6929b5f590a9", ResourceVersion:"484", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231811, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0027b7770), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0027b7788)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0027b77a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0027b77b8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00198cd60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Creat
ionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0027b77d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexV
olumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0027b77e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVol
umeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSI
VolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0027b7800), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v
1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00198cd80)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00198cdc0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amoun
t{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropag
ation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0027cb0e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00265db78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0006dc850), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil
), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001b86c90)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00265dbc0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition
(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	W0810 22:38:19.919799       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210810223625-345780-m02. Assuming now as a timestamp.
	I0810 22:38:19.919829       1 event.go:291] "Event occurred" object="multinode-20210810223625-345780-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210810223625-345780-m02 event: Registered Node multinode-20210810223625-345780-m02 in Controller"
	I0810 22:38:28.655364       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0810 22:38:28.667316       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-crhdk"
	I0810 22:38:28.674665       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-h8c2g"
	I0810 22:38:29.928240       1 event.go:291] "Event occurred" object="default/busybox-84b6686758-crhdk" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-84b6686758-crhdk"
	
	* 
	* ==> kube-proxy [88c7b5fc4adc9c2ae6767332919fd4629d68ae70f94b9f7375d2d89d611d0c91] <==
	* I0810 22:37:11.879852       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0810 22:37:11.879917       1 server_others.go:140] Detected node IP 192.168.49.2
	W0810 22:37:11.879944       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0810 22:37:11.907716       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0810 22:37:11.907759       1 server_others.go:212] Using iptables Proxier.
	I0810 22:37:11.907775       1 server_others.go:219] creating dualStackProxier for iptables.
	W0810 22:37:11.907792       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0810 22:37:11.908385       1 server.go:643] Version: v1.21.3
	I0810 22:37:11.909176       1 config.go:315] Starting service config controller
	I0810 22:37:11.909270       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0810 22:37:11.909194       1 config.go:224] Starting endpoint slice config controller
	I0810 22:37:11.909395       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0810 22:37:11.911121       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0810 22:37:11.912534       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0810 22:37:12.009668       1 shared_informer.go:247] Caches are synced for service config 
	I0810 22:37:12.009805       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [0fdbf17c340b00d4cc89386d2caec2eef6582759661fca3b3c6f334dcbc264b1] <==
	* I0810 22:36:48.684280       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0810 22:36:48.757629       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:36:48.757636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:36:48.757631       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:36:48.757747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:36:48.757778       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:36:48.757820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:36:48.757875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:36:48.757909       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:36:48.757937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:36:48.757977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:36:48.758007       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:36:48.759626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:36:48.759868       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:36:48.760138       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:36:49.613930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:36:49.649011       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:36:49.669756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:36:49.671685       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:36:49.732443       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:36:49.757933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:36:49.824320       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:36:49.830382       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:36:49.836436       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0810 22:36:51.584000       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:36:27 UTC, end at Tue 2021-08-10 22:38:35 UTC. --
	Aug 10 22:37:10 multinode-20210810223625-345780 kubelet[1606]: I0810 22:37:10.689082    1606 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws7qs\" (UniqueName: \"kubernetes.io/projected/827b85ba-dadb-4a2a-baa7-557371796646-kube-api-access-ws7qs\") pod \"coredns-558bd4d5db-brf4l\" (UID: \"827b85ba-dadb-4a2a-baa7-557371796646\") "
	Aug 10 22:37:10 multinode-20210810223625-345780 kubelet[1606]: I0810 22:37:10.689139    1606 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/827b85ba-dadb-4a2a-baa7-557371796646-config-volume\") pod \"coredns-558bd4d5db-brf4l\" (UID: \"827b85ba-dadb-4a2a-baa7-557371796646\") "
	Aug 10 22:37:11 multinode-20210810223625-345780 kubelet[1606]: I0810 22:37:11.459533    1606 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:37:11 multinode-20210810223625-345780 kubelet[1606]: I0810 22:37:11.496266    1606 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/94400cf8-9fbe-457d-99d8-78eb282c11cb-tmp\") pod \"storage-provisioner\" (UID: \"94400cf8-9fbe-457d-99d8-78eb282c11cb\") "
	Aug 10 22:37:11 multinode-20210810223625-345780 kubelet[1606]: I0810 22:37:11.496326    1606 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc682\" (UniqueName: \"kubernetes.io/projected/94400cf8-9fbe-457d-99d8-78eb282c11cb-kube-api-access-jc682\") pod \"storage-provisioner\" (UID: \"94400cf8-9fbe-457d-99d8-78eb282c11cb\") "
	Aug 10 22:37:17 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:17.087472    1606 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:37:21 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:21.882368    1606 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-brf4l_kube-system_827b85ba-dadb-4a2a-baa7-557371796646_0(51136586225ca80b590fac0f8f2c4bce63faecbfe1d1d3d84a8dd9bb9711a4ee): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 10 22:37:21 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:21.882468    1606 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-brf4l_kube-system_827b85ba-dadb-4a2a-baa7-557371796646_0(51136586225ca80b590fac0f8f2c4bce63faecbfe1d1d3d84a8dd9bb9711a4ee): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-brf4l"
	Aug 10 22:37:21 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:21.882503    1606 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-brf4l_kube-system_827b85ba-dadb-4a2a-baa7-557371796646_0(51136586225ca80b590fac0f8f2c4bce63faecbfe1d1d3d84a8dd9bb9711a4ee): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-brf4l"
	Aug 10 22:37:21 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:21.882606    1606 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-brf4l_kube-system(827b85ba-dadb-4a2a-baa7-557371796646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-brf4l_kube-system(827b85ba-dadb-4a2a-baa7-557371796646)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-brf4l_kube-system_827b85ba-dadb-4a2a-baa7-557371796646_0(51136586225ca80b590fac0f8f2c4bce63faecbfe1d1d3d84a8dd9bb9711a4ee): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-brf4l" podUID=827b85ba-dadb-4a2a-baa7-557371796646
	Aug 10 22:37:27 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:27.145570    1606 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:37:37 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:37.202497    1606 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:37:45 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:45.149696    1606 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-brf4l_kube-system_827b85ba-dadb-4a2a-baa7-557371796646_0(4838cefa84d60d27e531db902b54eef880abcc715d552a767d19574fd5641d7c): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 10 22:37:45 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:45.149779    1606 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-brf4l_kube-system_827b85ba-dadb-4a2a-baa7-557371796646_0(4838cefa84d60d27e531db902b54eef880abcc715d552a767d19574fd5641d7c): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-brf4l"
	Aug 10 22:37:45 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:45.149824    1606 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-brf4l_kube-system_827b85ba-dadb-4a2a-baa7-557371796646_0(4838cefa84d60d27e531db902b54eef880abcc715d552a767d19574fd5641d7c): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-brf4l"
	Aug 10 22:37:45 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:45.149910    1606 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-brf4l_kube-system(827b85ba-dadb-4a2a-baa7-557371796646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-brf4l_kube-system(827b85ba-dadb-4a2a-baa7-557371796646)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-brf4l_kube-system_827b85ba-dadb-4a2a-baa7-557371796646_0(4838cefa84d60d27e531db902b54eef880abcc715d552a767d19574fd5641d7c): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-brf4l" podUID=827b85ba-dadb-4a2a-baa7-557371796646
	Aug 10 22:37:47 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:47.258184    1606 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:37:57 multinode-20210810223625-345780 kubelet[1606]: E0810 22:37:57.320195    1606 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:38:00 multinode-20210810223625-345780 kubelet[1606]: W0810 22:38:00.114804    1606 container.go:586] Failed to update stats for container "/system.slice/crio-e53303f23a1ece4926226c5e8c81453235a78b0c4be104ee16bab555e3e5b721.scope": /sys/fs/cgroup/cpuset/system.slice/crio-e53303f23a1ece4926226c5e8c81453235a78b0c4be104ee16bab555e3e5b721.scope/cpuset.mems found to be empty, continuing to push stats
	Aug 10 22:38:07 multinode-20210810223625-345780 kubelet[1606]: E0810 22:38:07.379645    1606 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa\": RecentStats: unable to find data in memory cache], [\"/system.slice/crio-e53303f23a1ece4926226c5e8c81453235a78b0c4be104ee16bab555e3e5b721.scope\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:38:10 multinode-20210810223625-345780 kubelet[1606]: W0810 22:38:10.496399    1606 container.go:586] Failed to update stats for container "/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa": /sys/fs/cgroup/cpuset/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/cpuset.cpus found to be empty, continuing to push stats
	Aug 10 22:38:17 multinode-20210810223625-345780 kubelet[1606]: E0810 22:38:17.453534    1606 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:38:27 multinode-20210810223625-345780 kubelet[1606]: E0810 22:38:27.522878    1606 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa\": RecentStats: unable to find data in memory cache]"
	Aug 10 22:38:28 multinode-20210810223625-345780 kubelet[1606]: I0810 22:38:28.681984    1606 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:38:28 multinode-20210810223625-345780 kubelet[1606]: I0810 22:38:28.728351    1606 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpvxx\" (UniqueName: \"kubernetes.io/projected/abde6587-0fff-4c8d-a873-ef866a591221-kube-api-access-zpvxx\") pod \"busybox-84b6686758-h8c2g\" (UID: \"abde6587-0fff-4c8d-a873-ef866a591221\") "
	
	* 
	* ==> storage-provisioner [9b02a058c698abf9486ea9a28711d78b840cd68fc379863d9cb51cb19e64afc8] <==
	* I0810 22:37:12.391155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:37:12.399448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:37:12.399502       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:37:12.407346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:37:12.407464       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d58b6f5-e5e4-4c35-94af-0b976e4025de", APIVersion:"v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210810223625-345780_1692c549-1a9e-49e7-98cf-3c6d9ec2f156 became leader
	I0810 22:37:12.407487       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210810223625-345780_1692c549-1a9e-49e7-98cf-3c6d9ec2f156!
	I0810 22:37:12.508160       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210810223625-345780_1692c549-1a9e-49e7-98cf-3c6d9ec2f156!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210810223625-345780 -n multinode-20210810223625-345780
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210810223625-345780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context multinode-20210810223625-345780 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context multinode-20210810223625-345780 describe pod : exit status 1 (51.300239ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context multinode-20210810223625-345780 describe pod : exit status 1
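The non-zero exit above comes from `kubectl describe pod` being invoked with no resource names: the preceding field-selector query found no non-running pods, so the helper appended an empty list and kubectl refused it with "resource name may not be empty". A minimal shell sketch of the pattern, where the empty `pods` variable stands in for the jsonpath query result and the guard is a hypothetical fix, not part of helpers_test.go:

```shell
# Stand-in for: kubectl get po -o=jsonpath={.items[*].metadata.name} -A \
#   --field-selector=status.phase!=Running
pods=""

# Guard against calling `describe pod` with an empty name list, which
# exits 1 with "error: resource name may not be empty".
if [ -z "$pods" ]; then
  echo "no non-running pods to describe"
else
  kubectl describe pod $pods
fi
```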
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.69s)

TestPreload (147.82s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210810224612-345780 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.0
E0810 22:47:13.665182  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:47:59.310760  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210810224612-345780 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.0: (1m48.869164763s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210810224612-345780 -- sudo crictl pull busybox
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210810224612-345780 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210810224612-345780 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.3: (31.460148066s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210810224612-345780 -- sudo crictl image ls
preload_test.go:85: Expected to find busybox in output of `crictl image ls`, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

-- /stdout --
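The failure above reduces to a substring check over the image list: the second `minikube start` rebuilt the node from the v1.17.3 preload, and the busybox image pulled earlier did not survive, leaving `crictl image ls` with only the header row. A minimal sketch of that check in shell, using the empty table from the log as canned input (`images` and `result` are hypothetical names for illustration, not the test's real plumbing):

```shell
# Canned output matching the `crictl image ls` result captured above:
# header row only, no images listed.
images='IMAGE               TAG                 IMAGE ID            SIZE'

# The preload test effectively asserts that busybox survived the restart.
if printf '%s\n' "$images" | grep -q busybox; then
  result="busybox present"
else
  result="busybox missing"
fi
echo "$result"
```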
panic.go:613: *** TestPreload FAILED at 2021-08-10 22:48:34.14684268 +0000 UTC m=+1743.975326509
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect test-preload-20210810224612-345780
helpers_test.go:236: (dbg) docker inspect test-preload-20210810224612-345780:

-- stdout --
	[
	    {
	        "Id": "e9cb4affd09967622cd7f60a90ac8e5f9600bfe9dd25a14dd1b0dc03f817bc6e",
	        "Created": "2021-08-10T22:46:14.88346959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 469932,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:46:15.662567302Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/e9cb4affd09967622cd7f60a90ac8e5f9600bfe9dd25a14dd1b0dc03f817bc6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9cb4affd09967622cd7f60a90ac8e5f9600bfe9dd25a14dd1b0dc03f817bc6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9cb4affd09967622cd7f60a90ac8e5f9600bfe9dd25a14dd1b0dc03f817bc6e/hosts",
	        "LogPath": "/var/lib/docker/containers/e9cb4affd09967622cd7f60a90ac8e5f9600bfe9dd25a14dd1b0dc03f817bc6e/e9cb4affd09967622cd7f60a90ac8e5f9600bfe9dd25a14dd1b0dc03f817bc6e-json.log",
	        "Name": "/test-preload-20210810224612-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20210810224612-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20210810224612-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1af06fb292da6e0d7b38ef0d225ac86c15c0ae91b412af7ff98b200e82e7003e-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1af06fb292da6e0d7b38ef0d225ac86c15c0ae91b412af7ff98b200e82e7003e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1af06fb292da6e0d7b38ef0d225ac86c15c0ae91b412af7ff98b200e82e7003e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1af06fb292da6e0d7b38ef0d225ac86c15c0ae91b412af7ff98b200e82e7003e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "test-preload-20210810224612-345780",
	                "Source": "/var/lib/docker/volumes/test-preload-20210810224612-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20210810224612-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20210810224612-345780",
	                "name.minikube.sigs.k8s.io": "test-preload-20210810224612-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ee77329678d552fbd4bc92e21f8b489dd9dd145ac456e37b93d3f153e7bf539",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ee77329678d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20210810224612-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e9cb4affd099"
	                    ],
	                    "NetworkID": "fdf41e01d3ea91bccd0847507913409244a3df27a5328b5155bcca078750800f",
	                    "EndpointID": "798be5cd99460bc64fa00275307bc1006a666ac3aef0857f081c7a3c3f719a65",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
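The port mappings under `NetworkSettings.Ports` in the inspect output above (e.g. `22/tcp` → `127.0.0.1:33097`) are how the test harness reaches the kic node over SSH. A minimal sketch of pulling one of those host ports out of `docker inspect` JSON; the `inspect_output` sample is a trimmed-down stand-in for the real output, and `host_port` is a hypothetical helper, not part of minikube:

```python
import json

# Trimmed stand-in for `docker inspect test-preload-20210810224612-345780`,
# keeping only the NetworkSettings.Ports shape shown in the report above.
inspect_output = json.dumps([{
    "NetworkSettings": {
        "Ports": {
            "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33097"}],
            "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33094"}],
        }
    }
}])

def host_port(inspect_json: str, container_port: str) -> str:
    """Return the first host port bound to the given container port."""
    data = json.loads(inspect_json)[0]  # `docker inspect` emits a JSON array
    bindings = data["NetworkSettings"]["Ports"][container_port]
    return bindings[0]["HostPort"]

print(host_port(inspect_output, "22/tcp"))    # → 33097 (SSH into the node)
print(host_port(inspect_output, "8443/tcp"))  # → 33094 (apiserver endpoint)
```

The same lookup is what `docker inspect --format` or `docker port <container> 22` would do directly on the host.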
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-20210810224612-345780 -n test-preload-20210810224612-345780
helpers_test.go:245: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-20210810224612-345780 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p test-preload-20210810224612-345780 logs -n 25: (1.136729542s)
helpers_test.go:253: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                             |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| kubectl | -p                                                           | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:38:33 UTC | Tue, 10 Aug 2021 22:38:33 UTC |
	|         | multinode-20210810223625-345780                              |                                     |         |         |                               |                               |
	|         | -- exec                                                      |                                     |         |         |                               |                               |
	|         | busybox-84b6686758-h8c2g                                     |                                     |         |         |                               |                               |
	|         | -- sh -c nslookup                                            |                                     |         |         |                               |                               |
	|         | host.minikube.internal | awk                                 |                                     |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                      |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:38:34 UTC | Tue, 10 Aug 2021 22:38:35 UTC |
	|         | logs -n 25                                                   |                                     |         |         |                               |                               |
	| node    | add -p                                                       | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:38:36 UTC | Tue, 10 Aug 2021 22:39:05 UTC |
	|         | multinode-20210810223625-345780                              |                                     |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                       |                                     |         |         |                               |                               |
	| profile | list --output json                                           | minikube                            | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:06 UTC | Tue, 10 Aug 2021 22:39:06 UTC |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:07 UTC | Tue, 10 Aug 2021 22:39:07 UTC |
	|         | cp testdata/cp-test.txt                                      |                                     |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:07 UTC | Tue, 10 Aug 2021 22:39:07 UTC |
	|         | ssh sudo cat                                                 |                                     |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780 cp testdata/cp-test.txt      | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:07 UTC | Tue, 10 Aug 2021 22:39:08 UTC |
	|         | multinode-20210810223625-345780-m02:/home/docker/cp-test.txt |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:08 UTC | Tue, 10 Aug 2021 22:39:08 UTC |
	|         | ssh -n                                                       |                                     |         |         |                               |                               |
	|         | multinode-20210810223625-345780-m02                          |                                     |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780 cp testdata/cp-test.txt      | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:08 UTC | Tue, 10 Aug 2021 22:39:08 UTC |
	|         | multinode-20210810223625-345780-m03:/home/docker/cp-test.txt |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:08 UTC | Tue, 10 Aug 2021 22:39:09 UTC |
	|         | ssh -n                                                       |                                     |         |         |                               |                               |
	|         | multinode-20210810223625-345780-m03                          |                                     |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:09 UTC | Tue, 10 Aug 2021 22:39:10 UTC |
	|         | node stop m03                                                |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:11 UTC | Tue, 10 Aug 2021 22:39:41 UTC |
	|         | node start m03                                               |                                     |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                     |         |         |                               |                               |
	| stop    | -p                                                           | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:42 UTC | Tue, 10 Aug 2021 22:40:25 UTC |
	|         | multinode-20210810223625-345780                              |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:40:25 UTC | Tue, 10 Aug 2021 22:41:59 UTC |
	|         | multinode-20210810223625-345780                              |                                     |         |         |                               |                               |
	|         | --wait=true -v=8                                             |                                     |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:42:00 UTC | Tue, 10 Aug 2021 22:42:04 UTC |
	|         | node delete m03                                              |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:42:05 UTC | Tue, 10 Aug 2021 22:42:46 UTC |
	|         | stop                                                         |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:42:46 UTC | Tue, 10 Aug 2021 22:43:56 UTC |
	|         | multinode-20210810223625-345780                              |                                     |         |         |                               |                               |
	|         | --wait=true -v=8                                             |                                     |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                     |         |         |                               |                               |
	|         | --driver=docker                                              |                                     |         |         |                               |                               |
	|         | --container-runtime=crio                                     |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210810223625-345780-m03 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:43:57 UTC | Tue, 10 Aug 2021 22:44:24 UTC |
	|         | multinode-20210810223625-345780-m03                          |                                     |         |         |                               |                               |
	|         | --driver=docker                                              |                                     |         |         |                               |                               |
	|         | --container-runtime=crio                                     |                                     |         |         |                               |                               |
	| delete  | -p                                                           | multinode-20210810223625-345780-m03 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:44:25 UTC | Tue, 10 Aug 2021 22:44:28 UTC |
	|         | multinode-20210810223625-345780-m03                          |                                     |         |         |                               |                               |
	| -p      | multinode-20210810223625-345780                              | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:44:28 UTC | Tue, 10 Aug 2021 22:44:29 UTC |
	|         | logs -n 25                                                   |                                     |         |         |                               |                               |
	| delete  | -p                                                           | multinode-20210810223625-345780     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:44:29 UTC | Tue, 10 Aug 2021 22:44:34 UTC |
	|         | multinode-20210810223625-345780                              |                                     |         |         |                               |                               |
	| start   | -p                                                           | test-preload-20210810224612-345780  | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:46:12 UTC | Tue, 10 Aug 2021 22:48:01 UTC |
	|         | test-preload-20210810224612-345780                           |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                              |                                     |         |         |                               |                               |
	|         | --wait=true --preload=false                                  |                                     |         |         |                               |                               |
	|         | --driver=docker                                              |                                     |         |         |                               |                               |
	|         | --container-runtime=crio                                     |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0                                 |                                     |         |         |                               |                               |
	| ssh     | -p                                                           | test-preload-20210810224612-345780  | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:48:01 UTC | Tue, 10 Aug 2021 22:48:02 UTC |
	|         | test-preload-20210810224612-345780                           |                                     |         |         |                               |                               |
	|         | -- sudo crictl pull busybox                                  |                                     |         |         |                               |                               |
	| start   | -p                                                           | test-preload-20210810224612-345780  | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:48:02 UTC | Tue, 10 Aug 2021 22:48:33 UTC |
	|         | test-preload-20210810224612-345780                           |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                              |                                     |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker                             |                                     |         |         |                               |                               |
	|         |  --container-runtime=crio                                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3                                 |                                     |         |         |                               |                               |
	| ssh     | -p                                                           | test-preload-20210810224612-345780  | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:48:33 UTC | Tue, 10 Aug 2021 22:48:34 UTC |
	|         | test-preload-20210810224612-345780                           |                                     |         |         |                               |                               |
	|         | -- sudo crictl image ls                                      |                                     |         |         |                               |                               |
	|---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:48:02
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:48:02.439987  474799 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:48:02.440070  474799 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:48:02.440090  474799 out.go:311] Setting ErrFile to fd 2...
	I0810 22:48:02.440095  474799 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:48:02.440219  474799 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:48:02.440500  474799 out.go:305] Setting JSON to false
	I0810 22:48:02.478915  474799 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":9044,"bootTime":1628626639,"procs":205,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:48:02.479054  474799 start.go:121] virtualization: kvm guest
	I0810 22:48:02.481883  474799 out.go:177] * [test-preload-20210810224612-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:48:02.483664  474799 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:48:02.482061  474799 notify.go:169] Checking for updates...
	I0810 22:48:02.485460  474799 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:48:02.487035  474799 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:48:02.488638  474799 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:48:02.491549  474799 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0810 22:48:02.491607  474799 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:48:02.543234  474799 docker.go:132] docker version: linux-19.03.15
	I0810 22:48:02.543351  474799 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:48:02.627613  474799 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-10 22:48:02.579526238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:48:02.627730  474799 docker.go:244] overlay module found
	I0810 22:48:02.629934  474799 out.go:177] * Using the docker driver based on existing profile
	I0810 22:48:02.629962  474799 start.go:278] selected driver: docker
	I0810 22:48:02.629969  474799 start.go:751] validating driver "docker" against &{Name:test-preload-20210810224612-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20210810224612-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:48:02.630080  474799 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:48:02.630117  474799 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:48:02.630137  474799 out.go:242] ! Your cgroup does not allow setting memory.
	I0810 22:48:02.631584  474799 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:48:02.632446  474799 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:48:02.713337  474799 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-10 22:48:02.668107395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0810 22:48:02.713472  474799 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:48:02.713503  474799 out.go:242] ! Your cgroup does not allow setting memory.
	I0810 22:48:02.715738  474799 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:48:02.715843  474799 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:48:02.715868  474799 cni.go:93] Creating CNI manager for ""
	I0810 22:48:02.715877  474799 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:48:02.715890  474799 start_flags.go:277] config:
	{Name:test-preload-20210810224612-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210810224612-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:48:02.718019  474799 out.go:177] * Starting control plane node test-preload-20210810224612-345780 in cluster test-preload-20210810224612-345780
	I0810 22:48:02.718066  474799 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:48:02.719595  474799 out.go:177] * Pulling base image ...
	I0810 22:48:02.719628  474799 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0810 22:48:02.719732  474799 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	W0810 22:48:02.781835  474799 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0810 22:48:02.782066  474799 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/config.json ...
	I0810 22:48:02.782149  474799 cache.go:108] acquiring lock: {Name:mk2992684e28e28c0a4befdb8ebb26ca589cb57f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782173  474799 cache.go:108] acquiring lock: {Name:mk23f17a20ce51945a637913127361c58feadbb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782227  474799 cache.go:108] acquiring lock: {Name:mk90848475cc14d6161f4a571efd08a7bf25861d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782265  474799 cache.go:108] acquiring lock: {Name:mkce02a97af6b37df75397d24b6351aa6b2b00f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782288  474799 cache.go:108] acquiring lock: {Name:mk5c9c0a42eadd000f2f20e281cee33d7cf38fb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782173  474799 cache.go:108] acquiring lock: {Name:mk06ff21464a721667096dff5d67c2caea6f6747 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782298  474799 cache.go:108] acquiring lock: {Name:mkbdfa3defe6d3385cdc7fd98eb8ed8245d220a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782357  474799 cache.go:108] acquiring lock: {Name:mk670e6c6f08d6a3e2c14e8c6a293245dff14161 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782391  474799 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0810 22:48:02.782403  474799 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0810 22:48:02.782393  474799 cache.go:108] acquiring lock: {Name:mk424aee259face7c113807a02e8507dd3f19426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.782414  474799 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 280.835µs
	I0810 22:48:02.782424  474799 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 267.498µs
	I0810 22:48:02.782439  474799 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0810 22:48:02.782441  474799 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0810 22:48:02.782465  474799 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0810 22:48:02.782484  474799 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 262.317µs
	I0810 22:48:02.782496  474799 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0810 22:48:02.782505  474799 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
	I0810 22:48:02.782515  474799 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:48:02.782528  474799 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 275.561µs
	I0810 22:48:02.782533  474799 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:48:02.782532  474799 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:48:02.782554  474799 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0810 22:48:02.782571  474799 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 183.456µs
	I0810 22:48:02.782581  474799 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0810 22:48:02.782512  474799 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 260.206µs
	I0810 22:48:02.782623  474799 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0810 22:48:02.782540  474799 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
	I0810 22:48:02.782496  474799 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0810 22:48:02.782808  474799 cache.go:108] acquiring lock: {Name:mk6d58f756a10ed599ec204afb396cf445c91a57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.783113  474799 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:48:02.783513  474799 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:48:02.783516  474799 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:48:02.783535  474799 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:48:02.783823  474799 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:48:02.814413  474799 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:48:02.814465  474799 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:48:02.814490  474799 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:48:02.814541  474799 start.go:313] acquiring machines lock for test-preload-20210810224612-345780: {Name:mke1ea155a29a9122e04d5a5ff478b5687ef3575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:48:02.814663  474799 start.go:317] acquired machines lock for "test-preload-20210810224612-345780" in 83.579µs
	I0810 22:48:02.814688  474799 start.go:93] Skipping create...Using existing machine configuration
	I0810 22:48:02.814698  474799 fix.go:55] fixHost starting: 
	I0810 22:48:02.814975  474799 cli_runner.go:115] Run: docker container inspect test-preload-20210810224612-345780 --format={{.State.Status}}
	I0810 22:48:02.856054  474799 fix.go:108] recreateIfNeeded on test-preload-20210810224612-345780: state=Running err=<nil>
	W0810 22:48:02.856088  474799 fix.go:134] unexpected machine state, will restart: <nil>
	I0810 22:48:02.859281  474799 out.go:177] * Updating the running docker "test-preload-20210810224612-345780" container ...
	I0810 22:48:02.859322  474799 machine.go:88] provisioning docker machine ...
	I0810 22:48:02.859345  474799 ubuntu.go:169] provisioning hostname "test-preload-20210810224612-345780"
	I0810 22:48:02.859412  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:02.902330  474799 main.go:130] libmachine: Using SSH client type: native
	I0810 22:48:02.902544  474799 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I0810 22:48:02.902566  474799 main.go:130] libmachine: About to run SSH command:
	sudo hostname test-preload-20210810224612-345780 && echo "test-preload-20210810224612-345780" | sudo tee /etc/hostname
	I0810 22:48:03.025469  474799 main.go:130] libmachine: SSH cmd err, output: <nil>: test-preload-20210810224612-345780
	
	I0810 22:48:03.025550  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:03.045528  474799 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0810 22:48:03.046545  474799 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0810 22:48:03.049439  474799 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0810 22:48:03.050176  474799 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0810 22:48:03.068623  474799 main.go:130] libmachine: Using SSH client type: native
	I0810 22:48:03.068820  474799 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I0810 22:48:03.068842  474799 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20210810224612-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20210810224612-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20210810224612-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:48:03.181725  474799 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:48:03.181761  474799 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:48:03.181837  474799 ubuntu.go:177] setting up certificates
	I0810 22:48:03.181853  474799 provision.go:83] configureAuth start
	I0810 22:48:03.181938  474799 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20210810224612-345780
	I0810 22:48:03.231054  474799 provision.go:137] copyHostCerts
	I0810 22:48:03.231127  474799 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:48:03.231142  474799 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:48:03.231212  474799 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:48:03.231346  474799 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:48:03.231362  474799 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:48:03.231395  474799 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:48:03.231464  474799 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:48:03.231476  474799 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:48:03.231503  474799 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:48:03.231561  474799 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.test-preload-20210810224612-345780 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20210810224612-345780]
	I0810 22:48:03.414447  474799 provision.go:171] copyRemoteCerts
	I0810 22:48:03.414512  474799 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:48:03.414560  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:03.463292  474799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224612-345780/id_rsa Username:docker}
	I0810 22:48:03.548438  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:48:03.565915  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0810 22:48:03.583009  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 22:48:03.601166  474799 provision.go:86] duration metric: configureAuth took 419.291949ms
	I0810 22:48:03.601201  474799 ubuntu.go:193] setting minikube options for container-runtime
	I0810 22:48:03.601490  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:03.645092  474799 main.go:130] libmachine: Using SSH client type: native
	I0810 22:48:03.645322  474799 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33097 <nil> <nil>}
	I0810 22:48:03.645358  474799 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:48:03.743958  474799 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 exists
	I0810 22:48:03.744011  474799 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3" took 961.849502ms
	I0810 22:48:03.744026  474799 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 succeeded
	I0810 22:48:04.025034  474799 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 exists
	I0810 22:48:04.025101  474799 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3" took 1.242894136s
	I0810 22:48:04.025134  474799 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 succeeded
	I0810 22:48:04.058142  474799 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 exists
	I0810 22:48:04.058197  474799 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3" took 1.275488936s
	I0810 22:48:04.058212  474799 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 succeeded
	I0810 22:48:04.224208  474799 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 exists
	I0810 22:48:04.224262  474799 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3" took 1.441907703s
	I0810 22:48:04.224295  474799 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 succeeded
	I0810 22:48:04.224320  474799 cache.go:88] Successfully saved all images to host disk.
	I0810 22:48:04.295948  474799 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:48:04.295980  474799 machine.go:91] provisioned docker machine in 1.436649902s
	I0810 22:48:04.295992  474799 start.go:267] post-start starting for "test-preload-20210810224612-345780" (driver="docker")
	I0810 22:48:04.296000  474799 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:48:04.296082  474799 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:48:04.296132  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:04.335601  474799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224612-345780/id_rsa Username:docker}
	I0810 22:48:04.425110  474799 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:48:04.428121  474799 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 22:48:04.428150  474799 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 22:48:04.428164  474799 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 22:48:04.428172  474799 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0810 22:48:04.428187  474799 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:48:04.428252  474799 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:48:04.428367  474799 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> 3457802.pem in /etc/ssl/certs
	I0810 22:48:04.428495  474799 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:48:04.435890  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:48:04.454283  474799 start.go:270] post-start completed in 158.273356ms
	I0810 22:48:04.454360  474799 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:48:04.454407  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:04.495221  474799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224612-345780/id_rsa Username:docker}
	I0810 22:48:04.577823  474799 fix.go:57] fixHost completed within 1.763117057s
	I0810 22:48:04.577854  474799 start.go:80] releasing machines lock for "test-preload-20210810224612-345780", held for 1.763178813s
	I0810 22:48:04.577950  474799 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20210810224612-345780
	I0810 22:48:04.619113  474799 ssh_runner.go:149] Run: systemctl --version
	I0810 22:48:04.619169  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:04.619174  474799 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:48:04.619275  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:04.661985  474799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224612-345780/id_rsa Username:docker}
	I0810 22:48:04.663722  474799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224612-345780/id_rsa Username:docker}
	I0810 22:48:04.745196  474799 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:48:04.777090  474799 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:48:04.787467  474799 docker.go:153] disabling docker service ...
	I0810 22:48:04.787531  474799 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:48:04.797363  474799 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:48:04.807076  474799 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:48:04.922236  474799 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:48:05.030982  474799 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:48:05.040678  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:48:05.053470  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0810 22:48:05.061491  474799 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:48:05.061533  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:48:05.069628  474799 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:48:05.076066  474799 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:48:05.076125  474799 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:48:05.083269  474799 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:48:05.089692  474799 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:48:05.198483  474799 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:48:05.208206  474799 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:48:05.208277  474799 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:48:05.211624  474799 start.go:417] Will wait 60s for crictl version
	I0810 22:48:05.211688  474799 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:48:05.240519  474799 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0810 22:48:05.240602  474799 ssh_runner.go:149] Run: crio --version
	I0810 22:48:05.304674  474799 ssh_runner.go:149] Run: crio --version
	I0810 22:48:05.369620  474799 out.go:177] * Preparing Kubernetes v1.17.3 on CRI-O 1.20.3 ...
	I0810 22:48:05.369713  474799 cli_runner.go:115] Run: docker network inspect test-preload-20210810224612-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:48:05.408869  474799 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0810 22:48:05.412501  474799 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0810 22:48:05.412557  474799 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:48:05.441322  474799 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.17.3". assuming images are not preloaded.
	I0810 22:48:05.441352  474799 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0810 22:48:05.441443  474799 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0810 22:48:05.441465  474799 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:48:05.441466  474799 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:48:05.441493  474799 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:48:05.441518  474799 image.go:133] retrieving image: k8s.gcr.io/pause:3.1
	I0810 22:48:05.441531  474799 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0810 22:48:05.441443  474799 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:48:05.441675  474799 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0810 22:48:05.441499  474799 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0810 22:48:05.441683  474799 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:48:05.445122  474799 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:48:05.445161  474799 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:48:05.445172  474799 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:48:05.445129  474799 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:48:05.457334  474799 image.go:171] found k8s.gcr.io/pause:3.1 locally: &{Image:0xc000618480}
	I0810 22:48:05.457439  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0810 22:48:05.680987  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:48:05.681870  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:48:05.682244  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:48:05.682949  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:48:05.858313  474799 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000a74060}
	I0810 22:48:05.858438  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:48:05.869213  474799 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.17.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.3" does not exist at hash "ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1" in container runtime
	I0810 22:48:05.869277  474799 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:48:05.869322  474799 ssh_runner.go:149] Run: which crictl
	I0810 22:48:05.869420  474799 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.17.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.3" does not exist at hash "90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" in container runtime
	I0810 22:48:05.869493  474799 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:48:05.869580  474799 ssh_runner.go:149] Run: which crictl
	I0810 22:48:05.869472  474799 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.17.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.3" does not exist at hash "b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302" in container runtime
	I0810 22:48:05.869759  474799 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:48:05.869812  474799 ssh_runner.go:149] Run: which crictl
	I0810 22:48:05.873545  474799 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.17.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.3" does not exist at hash "d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad" in container runtime
	I0810 22:48:05.873590  474799 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:48:05.873627  474799 ssh_runner.go:149] Run: which crictl
	I0810 22:48:05.959511  474799 image.go:171] found k8s.gcr.io/coredns:1.6.5 locally: &{Image:0xc000a741a0}
	I0810 22:48:05.959631  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0810 22:48:05.981034  474799 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000a745e0}
	I0810 22:48:05.981137  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0810 22:48:06.020087  474799 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:48:06.020182  474799 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:48:06.020202  474799 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:48:06.020201  474799 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:48:06.131757  474799 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0810 22:48:06.131809  474799 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0810 22:48:06.131859  474799 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.17.3
	I0810 22:48:06.131884  474799 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0810 22:48:06.131923  474799 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0810 22:48:06.131956  474799 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0810 22:48:06.132014  474799 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0810 22:48:06.132017  474799 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0810 22:48:06.136311  474799 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.3': No such file or directory
	I0810 22:48:06.136338  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 --> /var/lib/minikube/images/kube-apiserver_v1.17.3 (50635776 bytes)
	I0810 22:48:06.136382  474799 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.17.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.3': No such file or directory
	I0810 22:48:06.136410  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 --> /var/lib/minikube/images/kube-proxy_v1.17.3 (48706048 bytes)
	I0810 22:48:06.136458  474799 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.3': No such file or directory
	I0810 22:48:06.136485  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 --> /var/lib/minikube/images/kube-scheduler_v1.17.3 (33822208 bytes)
	I0810 22:48:06.136511  474799 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.3': No such file or directory
	I0810 22:48:06.136531  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 --> /var/lib/minikube/images/kube-controller-manager_v1.17.3 (48810496 bytes)
	I0810 22:48:06.444083  474799 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0810 22:48:06.444161  474799 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0810 22:48:07.955714  474799 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000116220}
	I0810 22:48:07.955838  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0810 22:48:08.337238  474799 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3: (1.893040038s)
	I0810 22:48:08.337273  474799 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 from cache
	I0810 22:48:08.337303  474799 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0810 22:48:08.337352  474799 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0810 22:48:08.581347  474799 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc000a740a0}
	I0810 22:48:08.581469  474799 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0810 22:48:11.599133  474799 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3: (3.261755108s)
	I0810 22:48:11.599160  474799 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 from cache
	I0810 22:48:11.599185  474799 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0810 22:48:11.599222  474799 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0810 22:48:11.599230  474799 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0: (3.017730576s)
	I0810 22:48:14.650766  474799 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3: (3.051511919s)
	I0810 22:48:14.650802  474799 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 from cache
	I0810 22:48:14.650826  474799 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.3
	I0810 22:48:14.650869  474799 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3
	I0810 22:48:16.399180  474799 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3: (1.748285871s)
	I0810 22:48:16.399210  474799 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 from cache
	I0810 22:48:16.399233  474799 cache_images.go:113] Successfully loaded all cached images
	I0810 22:48:16.399239  474799 cache_images.go:82] LoadImages completed in 10.957870865s
	I0810 22:48:16.399312  474799 ssh_runner.go:149] Run: crio config
	I0810 22:48:16.468309  474799 cni.go:93] Creating CNI manager for ""
	I0810 22:48:16.468334  474799 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:48:16.468346  474799 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:48:16.468362  474799 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.17.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20210810224612-345780 NodeName:test-preload-20210810224612-345780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:48:16.468533  474799 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "test-preload-20210810224612-345780"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:48:16.468625  474799 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-20210810224612-345780 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210810224612-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:48:16.468691  474799 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.17.3
	I0810 22:48:16.476292  474799 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.3': No such file or directory
	
	Initiating transfer...
	I0810 22:48:16.476359  474799 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.3
	I0810 22:48:16.484201  474799 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubeadm
	I0810 22:48:16.484235  474799 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubelet
	I0810 22:48:16.484260  474799 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubectl
	I0810 22:48:17.287158  474799 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.17.3/kubeadm
	I0810 22:48:17.290924  474799 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.17.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubeadm': No such file or directory
	I0810 22:48:17.290961  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubeadm --> /var/lib/minikube/binaries/v1.17.3/kubeadm (39346176 bytes)
	I0810 22:48:17.463123  474799 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:48:17.473324  474799 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0810 22:48:17.486878  474799 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.17.3/kubelet
	I0810 22:48:17.489914  474799 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.17.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubelet': No such file or directory
	I0810 22:48:17.489939  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubelet --> /var/lib/minikube/binaries/v1.17.3/kubelet (111584792 bytes)
	I0810 22:48:18.062685  474799 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.17.3/kubectl
	I0810 22:48:18.066701  474799 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.17.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubectl': No such file or directory
	I0810 22:48:18.066741  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubectl --> /var/lib/minikube/binaries/v1.17.3/kubectl (43499520 bytes)
	I0810 22:48:18.153334  474799 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:48:18.160500  474799 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (565 bytes)
	I0810 22:48:18.173393  474799 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:48:18.186267  474799 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0810 22:48:18.198957  474799 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0810 22:48:18.202141  474799 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780 for IP: 192.168.49.2
	I0810 22:48:18.202198  474799 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:48:18.202225  474799 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:48:18.202291  474799 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/client.key
	I0810 22:48:18.202316  474799 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/apiserver.key.dd3b5fb2
	I0810 22:48:18.202342  474799 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/proxy-client.key
	I0810 22:48:18.202452  474799 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem (1338 bytes)
	W0810 22:48:18.202504  474799 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780_empty.pem, impossibly tiny 0 bytes
	I0810 22:48:18.202520  474799 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0810 22:48:18.202567  474799 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:48:18.202606  474799 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:48:18.202659  474799 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:48:18.202728  474799 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:48:18.203854  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:48:18.221427  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0810 22:48:18.239691  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:48:18.256809  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0810 22:48:18.274023  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:48:18.291396  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:48:18.308946  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:48:18.326141  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0810 22:48:18.343358  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem --> /usr/share/ca-certificates/345780.pem (1338 bytes)
	I0810 22:48:18.360132  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /usr/share/ca-certificates/3457802.pem (1708 bytes)
	I0810 22:48:18.376509  474799 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:48:18.393276  474799 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:48:18.405591  474799 ssh_runner.go:149] Run: openssl version
	I0810 22:48:18.410443  474799 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:48:18.418437  474799 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:48:18.421726  474799 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:48:18.421777  474799 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:48:18.426477  474799 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:48:18.433081  474799 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/345780.pem && ln -fs /usr/share/ca-certificates/345780.pem /etc/ssl/certs/345780.pem"
	I0810 22:48:18.440313  474799 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/345780.pem
	I0810 22:48:18.443350  474799 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:29 /usr/share/ca-certificates/345780.pem
	I0810 22:48:18.443422  474799 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/345780.pem
	I0810 22:48:18.448366  474799 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/345780.pem /etc/ssl/certs/51391683.0"
	I0810 22:48:18.455444  474799 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3457802.pem && ln -fs /usr/share/ca-certificates/3457802.pem /etc/ssl/certs/3457802.pem"
	I0810 22:48:18.463687  474799 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3457802.pem
	I0810 22:48:18.467097  474799 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:29 /usr/share/ca-certificates/3457802.pem
	I0810 22:48:18.467180  474799 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3457802.pem
	I0810 22:48:18.472257  474799 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3457802.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 22:48:18.479130  474799 kubeadm.go:390] StartCluster: {Name:test-preload-20210810224612-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210810224612-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:48:18.479234  474799 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:48:18.479330  474799 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:48:18.504781  474799 cri.go:76] found id: "638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f"
	I0810 22:48:18.504814  474799 cri.go:76] found id: "1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204"
	I0810 22:48:18.504821  474799 cri.go:76] found id: "746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b"
	I0810 22:48:18.504825  474799 cri.go:76] found id: "17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67"
	I0810 22:48:18.504829  474799 cri.go:76] found id: "3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622"
	I0810 22:48:18.504833  474799 cri.go:76] found id: "41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71"
	I0810 22:48:18.504840  474799 cri.go:76] found id: "a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b"
	I0810 22:48:18.504846  474799 cri.go:76] found id: "4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1"
	I0810 22:48:18.504852  474799 cri.go:76] found id: ""
	I0810 22:48:18.504902  474799 ssh_runner.go:149] Run: sudo runc list -f json
	I0810 22:48:18.545469  474799 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204","pid":4038,"status":"running","bundle":"/run/containers/storage/overlay-containers/1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204/userdata","rootfs":"/var/lib/containers/storage/overlay/ed3e8236fc708c8f042c65cd6ef1f9809e95e8b838cc0c6227ef98180cc441a5/merged","created":"2021-08-10T22:47:39.641231897Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bcb0a7d1","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bcb0a7d1\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:47:39.496878269Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-d2rsl\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fbd95970-61b2-4490-b4e4-e228346528b8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-d2rsl_fbd95970-61b2-4490-b4e4-e228346528b8/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\
"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ed3e8236fc708c8f042c65cd6ef1f9809e95e8b838cc0c6227ef98180cc441a5/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-d2rsl_kube-system_fbd95970-61b2-4490-b4e4-e228346528b8_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-d2rsl_kube-system_fbd95970-61b2-4490-b4e4-e228346528b8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/h
osts\",\"host_path\":\"/var/lib/kubelet/pods/fbd95970-61b2-4490-b4e4-e228346528b8/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fbd95970-61b2-4490-b4e4-e228346528b8/containers/kindnet-cni/df2dac84\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/fbd95970-61b2-4490-b4e4-e228346528b8/volumes/kubernetes.io~secret/kindnet-token-b8wzn\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-d2rsl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fbd95970-61b2-4490-b4e4-e228346528b8","kubernetes.io/config.seen":"2021-08-10T22:47:33.453149482Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"
ociVersion":"1.0.2-dev","id":"17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67","pid":3647,"status":"running","bundle":"/run/containers/storage/overlay-containers/17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67/userdata","rootfs":"/var/lib/containers/storage/overlay/db89d1c3e810c1a85791217bc9148a1460baf7feee46e765562fb17c4ea3fef3/merged","created":"2021-08-10T22:47:34.009382716Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"72520bf0","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"72520bf0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGrace
Period\":\"30\"}","io.kubernetes.cri-o.ContainerID":"17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:47:33.914501107Z","io.kubernetes.cri-o.Image":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.17.0","io.kubernetes.cri-o.ImageRef":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-w22dk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8332ab8d-3a0d-4152-ad3a-5755f3767d14\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-w22dk_8332ab8d-3a0d-4152-ad3a-5755f3767d14/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/db89d1c3e810c1a85791217bc9148a1460b
af7feee46e765562fb17c4ea3fef3/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-w22dk_kube-system_8332ab8d-3a0d-4152-ad3a-5755f3767d14_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-w22dk_kube-system_8332ab8d-3a0d-4152-ad3a-5755f3767d14_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8332ab8d-3a0d-4152-ad3a-5755f3767d14/etc-hosts\",\"readonly
\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8332ab8d-3a0d-4152-ad3a-5755f3767d14/containers/kube-proxy/ffa6c706\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/8332ab8d-3a0d-4152-ad3a-5755f3767d14/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/8332ab8d-3a0d-4152-ad3a-5755f3767d14/volumes/kubernetes.io~secret/kube-proxy-token-6m8t5\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-w22dk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8332ab8d-3a0d-4152-ad3a-5755f3767d14","kubernetes.io/config.seen":"2021-08-10T22:47:33.450603905Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersi
on":"1.0.2-dev","id":"38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589","pid":3870,"status":"running","bundle":"/run/containers/storage/overlay-containers/38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589/userdata","rootfs":"/var/lib/containers/storage/overlay/d5a4a3d6557ce15a9e4708082230ad6005e83a121adc9a6681ab4d0575c4fd0a/merged","created":"2021-08-10T22:47:36.893347294Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:47:34.781152815Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-
provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:47:36.803233233Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224612-345780","io.kubern
etes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6/38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storag
e/overlay/d5a4a3d6557ce15a9e4708082230ad6005e83a121adc9a6681ab4d0575c4fd0a/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"9db99b3b-fda3-4d7e-af
7c-9d2a73fef3c6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-10T22:47:34.781152815Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622","pid":2790,"status":"r
unning","bundle":"/run/containers/storage/overlay-containers/3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622/userdata","rootfs":"/var/lib/containers/storage/overlay/199a08fd0582d83ca113082d8d56b5daca764b2864e39bb055114f75cb657dd1/merged","created":"2021-08-10T22:47:11.78529533Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ba306fec","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ba306fec\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622","io.ku
bernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:47:11.513356928Z","io.kubernetes.cri-o.Image":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210810224612-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7d0175c933f149b18161b71978e1f8ac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210810224612-345780_7d0175c933f149b18161b71978e1f8ac/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/199a08fd0582d83ca113082d8d56b5daca764b2864e39bb055114f75cb657dd1/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-test-preload-20210810224612-345
780_kube-system_7d0175c933f149b18161b71978e1f8ac_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-test-preload-20210810224612-345780_kube-system_7d0175c933f149b18161b71978e1f8ac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7d0175c933f149b18161b71978e1f8ac/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7d0175c933f149b18161b71978e1f8ac/containers/etcd/48d325a2\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"reado
nly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-test-preload-20210810224612-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7d0175c933f149b18161b71978e1f8ac","kubernetes.io/config.hash":"7d0175c933f149b18161b71978e1f8ac","kubernetes.io/config.seen":"2021-08-10T22:47:10.264049894Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71","pid":2798,"status":"running","bundle":"/run/containers/storage/overlay-containers/41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71/userdata","rootfs":"/var/lib/containers/storage/overlay/95129a18dcc8226adc67ebbcbaf9426a6bca25678b4a727bde59adc6666490f1/merged","cr
eated":"2021-08-10T22:47:11.785177487Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ffc41559","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ffc41559\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:47:11.564274863Z","io.kubernetes.cri-o.Image":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kub
e-apiserver:v1.17.0","io.kubernetes.cri-o.ImageRef":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210810224612-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f8c1872d6958c845ffffb18f158fd9df\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210810224612-345780_f8c1872d6958c845ffffb18f158fd9df/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/95129a18dcc8226adc67ebbcbaf9426a6bca25678b4a727bde59adc6666490f1/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-test-preload-20210810224612-345780_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/fb488292b666bcb70b1610723c4b9019bb6ab0
10e3ef553effdd58f83fc04110/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-test-preload-20210810224612-345780_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f8c1872d6958c845ffffb18f158fd9df/containers/kube-apiserver/aa4f3eac\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f8c1872d6958c845ffffb18f158fd9df/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/e
tc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210810224612-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.hash":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.seen":"2021-08-10T22:47:10.264054885Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f","pid":2633,"status":"running","bundle":"/run/containers/storage/overlay-containers/4cb9d92813b0f3324
50081a886475c6922cbab721c0161456517541adc0b908f/userdata","rootfs":"/var/lib/containers/storage/overlay/41e8b037c646d8d2b00a33f15d72a879de159293b43d9735c7470d17ef6bcf0d/merged","created":"2021-08-10T22:47:11.401268972Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"7d0175c933f149b18161b71978e1f8ac\",\"kubernetes.io/config.seen\":\"2021-08-10T22:47:10.264049894Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-test-preload-20210810224612-345780_kube-system_7d0175c933f149b18161b71978e1f8ac_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:47:11.282540813Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224612-345780","io.kubernetes.cri-o.HostNetwork":"true","
io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-test-preload-20210810224612-345780","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"7d0175c933f149b18161b71978e1f8ac\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210810224612-345780\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210810224612-345780_7d0175c933f149b18161b71978e1f8ac/4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-test-preload-20210810224612-345780\",\"uid\":\"7d0175c933f149b18161b71978e1f8ac\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay
/41e8b037c646d8d2b00a33f15d72a879de159293b43d9735c7470d17ef6bcf0d/merged","io.kubernetes.cri-o.Name":"k8s_etcd-test-preload-20210810224612-345780_kube-system_7d0175c933f149b18161b71978e1f8ac_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f/userdata/shm","io.kubernetes.pod.name":"etcd-test-preload-20210810224612-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.
uid":"7d0175c933f149b18161b71978e1f8ac","kubernetes.io/config.hash":"7d0175c933f149b18161b71978e1f8ac","kubernetes.io/config.seen":"2021-08-10T22:47:10.264049894Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1","pid":2776,"status":"running","bundle":"/run/containers/storage/overlay-containers/4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1/userdata","rootfs":"/var/lib/containers/storage/overlay/9cfd7c3e592a89251d889a15bf8566008d2b9cc010b73586db839e0800555c14/merged","created":"2021-08-10T22:47:11.72929173Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ec604138","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolic
y":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ec604138\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:47:11.500202724Z","io.kubernetes.cri-o.Image":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.17.0","io.kubernetes.cri-o.ImageRef":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210810224612-345780\",\"io.kubernetes.pod.namespace
\":\"kube-system\",\"io.kubernetes.pod.uid\":\"01e1f4e495c3311ccc20368c1e385f74\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210810224612-345780_01e1f4e495c3311ccc20368c1e385f74/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9cfd7c3e592a89251d889a15bf8566008d2b9cc010b73586db839e0800555c14/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-test-preload-20210810224612-345780_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-test-preload-20210810224612-345780_kube-system_01e1f4e49
5c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/01e1f4e495c3311ccc20368c1e385f74/containers/kube-controller-manager/2ba2e41a\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/01e1f4e495c3311ccc20368c1e385f74/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/c
erts\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210810224612-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.hash":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.seen":"2021-08-10T22:47:10.264056694Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97","pid":2637,"status":"running","bu
ndle":"/run/containers/storage/overlay-containers/4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97/userdata","rootfs":"/var/lib/containers/storage/overlay/1fdaa5aac597b13b169880e18b0c3a1ae4edf47fae1acbf24828641084dd46c3/merged","created":"2021-08-10T22:47:11.401315653Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:47:10.264056694Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"01e1f4e495c3311ccc20368c1e385f74\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-test-preload-20210810224612-345780_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:47:11.280687967Z","io.k
ubernetes.cri-o.HostName":"test-preload-20210810224612-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-test-preload-20210810224612-345780","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"01e1f4e495c3311ccc20368c1e385f74\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210810224612-345780\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210810224612-345780_01e1f4e495c3311ccc20368c1e385f74/4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-co
ntroller-manager-test-preload-20210810224612-345780\",\"uid\":\"01e1f4e495c3311ccc20368c1e385f74\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1fdaa5aac597b13b169880e18b0c3a1ae4edf47fae1acbf24828641084dd46c3/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-test-preload-20210810224612-345780_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-c
ontainers/4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210810224612-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.hash":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.seen":"2021-08-10T22:47:10.264056694Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf","pid":4287,"status":"running","bundle":"/run/containers/storage/overlay-containers/5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf/userdata","rootfs":"/var/lib/containers/storage/overlay/71925794a86aaef2ba8c1b1d501811c3ad4ea59e6b7e9b16b145e46ce5c0f40c/merged","created":"2021-08-10T22:47:57.433296055Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.containe
r.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:47:33.692289136Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth51b39a91\",\"mac\":\"8e:f6:4a:c4:7a:6b\"},{\"name\":\"eth0\",\"mac\":\"0e:70:99:43:d3:54\",\"sandbox\":\"/var/run/netns/5cfe448a-2bfc-4138-bfc6-47a9c2796609\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6955765f44-7hn52_kube-system_63f5ee2b-efcd-4770-b07a-1935a8d6a75e_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:47:57.278920823Z","io.kubernetes.cri-o.HostName":"coredns-6955765f44-7hn52","io.kubernetes.cri-o.HostNetwork":
"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6955765f44-7hn52","io.kubernetes.cri-o.Labels":"{\"pod-template-hash\":\"6955765f44\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"63f5ee2b-efcd-4770-b07a-1935a8d6a75e\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-7hn52\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-7hn52_63f5ee2b-efcd-4770-b07a-1935a8d6a75e/5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6955765f44-7hn52\",\"uid\":\"63f5ee2b-efcd-4770-b07a-1935a8d6a75e\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/71925794a86aaef2ba8c1b1d50
1811c3ad4ea59e6b7e9b16b145e46ce5c0f40c/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6955765f44-7hn52_kube-system_63f5ee2b-efcd-4770-b07a-1935a8d6a75e_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf/userdata/shm","io.kubernetes.pod.name":"coredns-6955765f44-7hn52","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"63f5ee2b-efcd-4770-b07a-1935a8d6a75e","k8s-app":"kube-dns","
kubernetes.io/config.seen":"2021-08-10T22:47:33.692289136Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"6955765f44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328","pid":2640,"status":"running","bundle":"/run/containers/storage/overlay-containers/58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328/userdata","rootfs":"/var/lib/containers/storage/overlay/88c124b5204cd9bbdcee27e73c808be26c9c6978523a5d6034fd0813dbc27ce5/merged","created":"2021-08-10T22:47:11.401244441Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"kubernetes.io/config.seen\":\"2021-08-10T22:47:10.264057792Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"58b1de
c2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-test-preload-20210810224612-345780_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:47:11.284487841Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224612-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-test-preload-20210810224612-345780","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-202108
10224612-345780\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210810224612-345780_bb577061a17ad23cfbbf52e9419bf32a/58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-test-preload-20210810224612-345780\",\"uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/88c124b5204cd9bbdcee27e73c808be26c9c6978523a5d6034fd0813dbc27ce5/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-test-preload-20210810224612-345780_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328/userda
ta/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210810224612-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-10T22:47:10.264057792Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f","pid":4319,"status":"running","bundle":"/run/containers/storage/overlay-containers/638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c6
3bb7dc1f031f/userdata","rootfs":"/var/lib/containers/storage/overlay/e7e32078fe4797fe76a87eda918b6dde529ab98dd76d2f8ee80b73804ac0387b/merged","created":"2021-08-10T22:47:57.597186138Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"36abab20","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"36abab20\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metri
cs\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:47:57.473646016Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.5","io.kubernetes.cri-o.ImageRef":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-7hn52\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"63f5ee
2b-efcd-4770-b07a-1935a8d6a75e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-7hn52_63f5ee2b-efcd-4770-b07a-1935a8d6a75e/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e7e32078fe4797fe76a87eda918b6dde529ab98dd76d2f8ee80b73804ac0387b/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6955765f44-7hn52_kube-system_63f5ee2b-efcd-4770-b07a-1935a8d6a75e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6955765f44-7hn52_kube-system_63f5ee2b-efcd-4770-b07a-1935a8d6a75e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.ku
bernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/63f5ee2b-efcd-4770-b07a-1935a8d6a75e/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/63f5ee2b-efcd-4770-b07a-1935a8d6a75e/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/63f5ee2b-efcd-4770-b07a-1935a8d6a75e/containers/coredns/295ab3b3\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/63f5ee2b-efcd-4770-b07a-1935a8d6a75e/volumes/kubernetes.io~secret/coredns-token-mp8hh\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-6955765f44-7hn52","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"63f5ee2b-efcd-4770-b07a-1935a8d6a75e","kubernetes.io/config.seen":"2021-08-10T22:47:33.692289136Z","kubernetes.io/config.
source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b","pid":3902,"status":"running","bundle":"/run/containers/storage/overlay-containers/746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b/userdata","rootfs":"/var/lib/containers/storage/overlay/a5e2d316737ba53dce808c04d965aa39fe881bc413daaf0421c12d9b4a6aad4a/merged","created":"2021-08-10T22:47:37.141189806Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bce740f0","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bce740f0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.c
ontainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:47:36.97634391Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_9db99b3b-fda3-4d7e-af7c-9d2a73fef3c
6/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a5e2d316737ba53dce808c04d965aa39fe881bc413daaf0421c12d9b4a6aad4a/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/h
osts\",\"host_path\":\"/var/lib/kubelet/pods/9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6/containers/storage-provisioner/dc80955e\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6/volumes/kubernetes.io~secret/storage-provisioner-token-dc2gn\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},
\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-10T22:47:34.781152815Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b","pid":2783,"status":"running","bundle":"/run/containers/storage/overlay-containers/a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b/userdata","rootfs":"/var/lib/containers/storage/overlay/cbdf4da6d6b27b8e0290e8d0618937d5cb01ec3b11a93733382e8d57e2a4c73f/merged","c
reated":"2021-08-10T22:47:11.78517705Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99930feb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99930feb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:47:11.56299848Z","io.kubernetes.cri-o.Image":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube
-scheduler:v1.17.0","io.kubernetes.cri-o.ImageRef":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210810224612-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210810224612-345780_bb577061a17ad23cfbbf52e9419bf32a/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cbdf4da6d6b27b8e0290e8d0618937d5cb01ec3b11a93733382e8d57e2a4c73f/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-test-preload-20210810224612-345780_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/58b1dec2e2e327e4b76521e5268dedfc336e726
217b09eee1f2f29e22bd9e328/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-test-preload-20210810224612-345780_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/containers/kube-scheduler/504f1860\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210810224612-345780","io.kubernetes.pod.namespace":"kube-syste
m","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-10T22:47:10.264057792Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7","pid":3601,"status":"running","bundle":"/run/containers/storage/overlay-containers/bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7/userdata","rootfs":"/var/lib/containers/storage/overlay/c2947034a3d7c3d5fa6308a222721d97196bf5a47cf4bada849bfde7ea46c11c/merged","created":"2021-08-10T22:47:33.857220592Z","annotations":{"app":"kindnet","controller-revision-hash":"59985d8787","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.sourc
e\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-10T22:47:33.453149482Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-d2rsl_kube-system_fbd95970-61b2-4490-b4e4-e228346528b8_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:47:33.776678831Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224612-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-d2rsl","io.kubernetes.cri-o.Labels":"{\"tier\":\"node\",\"pod-template-generation\":\"1\",\"io.kubernetes.pod.uid\":\"fbd95970-61b2-4490-b4e4-e228346528b8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.po
d.name\":\"kindnet-d2rsl\",\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kindnet\",\"controller-revision-hash\":\"59985d8787\",\"app\":\"kindnet\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-d2rsl_fbd95970-61b2-4490-b4e4-e228346528b8/bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-d2rsl\",\"uid\":\"fbd95970-61b2-4490-b4e4-e228346528b8\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c2947034a3d7c3d5fa6308a222721d97196bf5a47cf4bada849bfde7ea46c11c/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-d2rsl_kube-system_fbd95970-61b2-4490-b4e4-e228346528b8_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bf89379660d40768ae622dd72ae2c
e1f9f16d0d53c7eb9172f77e823b4b93ce7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7/userdata/shm","io.kubernetes.pod.name":"kindnet-d2rsl","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"fbd95970-61b2-4490-b4e4-e228346528b8","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-10T22:47:33.453149482Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff","pid":3594,"status":"running","bundle":"/run/containers/storage/overlay-containers/ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff/
userdata","rootfs":"/var/lib/containers/storage/overlay/2e599d0914b489bd1d018c9c789d7f5028ec8273006531f0413d78050c69883c/merged","created":"2021-08-10T22:47:33.857256178Z","annotations":{"controller-revision-hash":"68bd87b66","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-10T22:47:33.450603905Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-w22dk_kube-system_8332ab8d-3a0d-4152-ad3a-5755f3767d14_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:47:33.774245876Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224612-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ce07770ac078d8c9e37ec49e548ffe5c8a04b
0318456b56aefb3a74a24d5edff/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-w22dk","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-proxy-w22dk\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"68bd87b66\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"8332ab8d-3a0d-4152-ad3a-5755f3767d14\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-w22dk_8332ab8d-3a0d-4152-ad3a-5755f3767d14/ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-w22dk\",\"uid\":\"8332ab8d-3a0d-4152-ad3a-5755f3767d14\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2e599d0914b489bd1d018c9c789d7f5028ec8273006531f0413d78050c69883c/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-w22dk_kube-system_8332ab
8d-3a0d-4152-ad3a-5755f3767d14_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff/userdata/shm","io.kubernetes.pod.name":"kube-proxy-w22dk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8332ab8d-3a0d-4152-ad3a-5755f3767d14","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-10T22:47:33.450603905Z","kubernetes.io/config.source":"api","org.systemd.proper
ty.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110","pid":2646,"status":"running","bundle":"/run/containers/storage/overlay-containers/fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110/userdata","rootfs":"/var/lib/containers/storage/overlay/a7241a662e4768abaa75a1167e91833d6da906884361f4c316bae1014a84d1d4/merged","created":"2021-08-10T22:47:11.405289017Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"f8c1872d6958c845ffffb18f158fd9df\",\"kubernetes.io/config.seen\":\"2021-08-10T22:47:10.264054885Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserve
r-test-preload-20210810224612-345780_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:47:11.278601308Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224612-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-test-preload-20210810224612-345780","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"f8c1872d6958c845ffffb18f158fd9df\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210810224612-345780\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-2021081022461
2-345780_f8c1872d6958c845ffffb18f158fd9df/fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-test-preload-20210810224612-345780\",\"uid\":\"f8c1872d6958c845ffffb18f158fd9df\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a7241a662e4768abaa75a1167e91833d6da906884361f4c316bae1014a84d1d4/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-test-preload-20210810224612-345780_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"fb488292b666bcb70b1610723c4b9
019bb6ab010e3ef553effdd58f83fc04110","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210810224612-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.hash":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.seen":"2021-08-10T22:47:10.264054885Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0810 22:48:18.546288  474799 cri.go:113] list returned 16 containers
	I0810 22:48:18.546307  474799 cri.go:116] container: {ID:1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204 Status:running}
	I0810 22:48:18.546320  474799 cri.go:122] skipping {1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204 running}: state = "running", want "paused"
	I0810 22:48:18.546340  474799 cri.go:116] container: {ID:17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67 Status:running}
	I0810 22:48:18.546345  474799 cri.go:122] skipping {17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67 running}: state = "running", want "paused"
	I0810 22:48:18.546353  474799 cri.go:116] container: {ID:38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589 Status:running}
	I0810 22:48:18.546357  474799 cri.go:118] skipping 38bbcbe52a4e3ff6c734cd470d01135a3c7624e227a9063972c520ecb7d10589 - not in ps
	I0810 22:48:18.546364  474799 cri.go:116] container: {ID:3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622 Status:running}
	I0810 22:48:18.546368  474799 cri.go:122] skipping {3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622 running}: state = "running", want "paused"
	I0810 22:48:18.546375  474799 cri.go:116] container: {ID:41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71 Status:running}
	I0810 22:48:18.546379  474799 cri.go:122] skipping {41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71 running}: state = "running", want "paused"
	I0810 22:48:18.546386  474799 cri.go:116] container: {ID:4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f Status:running}
	I0810 22:48:18.546390  474799 cri.go:118] skipping 4cb9d92813b0f332450081a886475c6922cbab721c0161456517541adc0b908f - not in ps
	I0810 22:48:18.546396  474799 cri.go:116] container: {ID:4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1 Status:running}
	I0810 22:48:18.546400  474799 cri.go:122] skipping {4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1 running}: state = "running", want "paused"
	I0810 22:48:18.546406  474799 cri.go:116] container: {ID:4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97 Status:running}
	I0810 22:48:18.546411  474799 cri.go:118] skipping 4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97 - not in ps
	I0810 22:48:18.546417  474799 cri.go:116] container: {ID:5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf Status:running}
	I0810 22:48:18.546421  474799 cri.go:118] skipping 5596cda596f7be3ed0ca48ef0070919f9e8acf9ffd5417d8efb65fe81dab59bf - not in ps
	I0810 22:48:18.546426  474799 cri.go:116] container: {ID:58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328 Status:running}
	I0810 22:48:18.546430  474799 cri.go:118] skipping 58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328 - not in ps
	I0810 22:48:18.546436  474799 cri.go:116] container: {ID:638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f Status:running}
	I0810 22:48:18.546441  474799 cri.go:122] skipping {638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f running}: state = "running", want "paused"
	I0810 22:48:18.546453  474799 cri.go:116] container: {ID:746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b Status:running}
	I0810 22:48:18.546463  474799 cri.go:122] skipping {746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b running}: state = "running", want "paused"
	I0810 22:48:18.546469  474799 cri.go:116] container: {ID:a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b Status:running}
	I0810 22:48:18.546474  474799 cri.go:122] skipping {a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b running}: state = "running", want "paused"
	I0810 22:48:18.546480  474799 cri.go:116] container: {ID:bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7 Status:running}
	I0810 22:48:18.546485  474799 cri.go:118] skipping bf89379660d40768ae622dd72ae2ce1f9f16d0d53c7eb9172f77e823b4b93ce7 - not in ps
	I0810 22:48:18.546491  474799 cri.go:116] container: {ID:ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff Status:running}
	I0810 22:48:18.546496  474799 cri.go:118] skipping ce07770ac078d8c9e37ec49e548ffe5c8a04b0318456b56aefb3a74a24d5edff - not in ps
	I0810 22:48:18.546502  474799 cri.go:116] container: {ID:fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110 Status:running}
	I0810 22:48:18.546506  474799 cri.go:118] skipping fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110 - not in ps
	I0810 22:48:18.546550  474799 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:48:18.553998  474799 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0810 22:48:18.554022  474799 kubeadm.go:600] restartCluster start
	I0810 22:48:18.554064  474799 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0810 22:48:18.560584  474799 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0810 22:48:18.561378  474799 kubeconfig.go:93] found "test-preload-20210810224612-345780" server: "https://192.168.49.2:8443"
	I0810 22:48:18.561913  474799 kapi.go:59] client config for test-preload-20210810224612-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20
210810224612-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:48:18.563600  474799 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0810 22:48:18.570811  474799 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-10 22:47:07.444530364 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-10 22:48:18.193307150 +0000
	@@ -40,7 +40,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.17.0
	+kubernetesVersion: v1.17.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
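The diff above is how minikube decides a restart needs reconfiguration: it regenerates the kubeadm config and compares it byte-for-byte against the deployed copy. A minimal sketch of that check, using throwaway temp files in place of the real `/var/tmp/minikube/kubeadm.yaml` pair (paths and version strings here are illustrative only):

```shell
# Reconfigure check sketch: a cluster "needs reconfigure" when the newly
# generated kubeadm config differs from the one already on disk.
old=$(mktemp); new=$(mktemp)
printf 'kubernetesVersion: v1.17.0\n' > "$old"
printf 'kubernetesVersion: v1.17.3\n' > "$new"
if diff -u "$old" "$new" > /dev/null; then
  verdict="configs match"
else
  verdict="needs reconfigure: configs differ"
fi
echo "$verdict"
rm -f "$old" "$new"
```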
	I0810 22:48:18.570830  474799 kubeadm.go:1032] stopping kube-system containers ...
	I0810 22:48:18.570843  474799 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0810 22:48:18.570888  474799 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:48:18.594328  474799 cri.go:76] found id: "638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f"
	I0810 22:48:18.594368  474799 cri.go:76] found id: "1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204"
	I0810 22:48:18.594374  474799 cri.go:76] found id: "746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b"
	I0810 22:48:18.594378  474799 cri.go:76] found id: "17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67"
	I0810 22:48:18.594382  474799 cri.go:76] found id: "3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622"
	I0810 22:48:18.594386  474799 cri.go:76] found id: "41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71"
	I0810 22:48:18.594390  474799 cri.go:76] found id: "a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b"
	I0810 22:48:18.594393  474799 cri.go:76] found id: "4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1"
	I0810 22:48:18.594397  474799 cri.go:76] found id: ""
	I0810 22:48:18.594402  474799 cri.go:221] Stopping containers: [638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f 1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204 746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b 17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67 3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622 41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71 a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b 4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1]
	I0810 22:48:18.594448  474799 ssh_runner.go:149] Run: which crictl
	I0810 22:48:18.597396  474799 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f 1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204 746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b 17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67 3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622 41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71 a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b 4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1
	I0810 22:48:19.998160  474799 ssh_runner.go:189] Completed: sudo /usr/bin/crictl stop 638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f 1558479f94677371aa5e1fb562a9e7db80d66078445bad166974163be71b4204 746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b 17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67 3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622 41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71 a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b 4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1: (1.400720809s)
	I0810 22:48:19.998239  474799 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0810 22:48:20.008742  474799 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:48:20.017313  474799 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5615 Aug 10 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5651 Aug 10 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2075 Aug 10 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5595 Aug 10 22:47 /etc/kubernetes/scheduler.conf
	
	I0810 22:48:20.017382  474799 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0810 22:48:20.024723  474799 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0810 22:48:20.032179  474799 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0810 22:48:20.039427  474799 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0810 22:48:20.046425  474799 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 22:48:20.053485  474799 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0810 22:48:20.053516  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:48:20.100379  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:48:20.936740  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:48:21.096469  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:48:21.159143  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:48:21.284013  474799 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:48:21.284081  474799 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:48:21.861779  474799 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:48:22.361397  474799 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:48:22.861313  474799 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:48:22.884848  474799 api_server.go:70] duration metric: took 1.600832886s to wait for apiserver process to appear ...
	I0810 22:48:22.884880  474799 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:48:22.884893  474799 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:48:26.441202  474799 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0810 22:48:26.441324  474799 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0810 22:48:26.941921  474799 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:48:26.962253  474799 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0810 22:48:26.962302  474799 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0810 22:48:27.442356  474799 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:48:27.462641  474799 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0810 22:48:27.462676  474799 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0810 22:48:27.942276  474799 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:48:27.947031  474799 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
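The sequence above (403, then 500 with failing poststarthooks, then 200 `ok`) is the normal progression of a restarting apiserver, so the caller just polls until it sees 200. A self-contained sketch of such a retry loop; the `check` function is a stand-in for a real probe like `curl -sk -o /dev/null -w '%{http_code}' https://192.168.49.2:8443/healthz`:

```shell
# Healthz polling sketch: retry until the endpoint reports HTTP 200.
check() { echo 200; }   # stand-in for the real curl probe described above
attempts=0
for i in 1 2 3; do
  attempts=$i
  code=$(check)
  [ "$code" = 200 ] && break
  sleep 0.5
done
echo "apiserver healthy after $attempts attempt(s)"
```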
	I0810 22:48:27.953243  474799 api_server.go:139] control plane version: v1.17.3
	I0810 22:48:27.953271  474799 api_server.go:129] duration metric: took 5.068384258s to wait for apiserver health ...
	I0810 22:48:27.953281  474799 cni.go:93] Creating CNI manager for ""
	I0810 22:48:27.953287  474799 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:48:27.955806  474799 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0810 22:48:27.955880  474799 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:48:27.959924  474799 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.17.3/kubectl ...
	I0810 22:48:27.959948  474799 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:48:27.974346  474799 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.17.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:48:28.160617  474799 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:48:28.169369  474799 system_pods.go:59] 8 kube-system pods found
	I0810 22:48:28.169398  474799 system_pods.go:61] "coredns-6955765f44-7hn52" [63f5ee2b-efcd-4770-b07a-1935a8d6a75e] Running
	I0810 22:48:28.169402  474799 system_pods.go:61] "etcd-test-preload-20210810224612-345780" [9cadd947-fcd8-47fe-b01b-2fe903d538b8] Running
	I0810 22:48:28.169406  474799 system_pods.go:61] "kindnet-d2rsl" [fbd95970-61b2-4490-b4e4-e228346528b8] Running
	I0810 22:48:28.169410  474799 system_pods.go:61] "kube-apiserver-test-preload-20210810224612-345780" [2992f40a-e117-4337-976f-76a0286ecaea] Pending
	I0810 22:48:28.169418  474799 system_pods.go:61] "kube-controller-manager-test-preload-20210810224612-345780" [87e9c0ec-13c9-4fa6-a5ea-0844457d6840] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0810 22:48:28.169422  474799 system_pods.go:61] "kube-proxy-w22dk" [8332ab8d-3a0d-4152-ad3a-5755f3767d14] Running
	I0810 22:48:28.169427  474799 system_pods.go:61] "kube-scheduler-test-preload-20210810224612-345780" [bd7d7b1c-790a-4877-b730-3d9e41acbb93] Pending / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0810 22:48:28.169431  474799 system_pods.go:61] "storage-provisioner" [9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6] Running
	I0810 22:48:28.169437  474799 system_pods.go:74] duration metric: took 8.794867ms to wait for pod list to return data ...
	I0810 22:48:28.169449  474799 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:48:28.172263  474799 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0810 22:48:28.172287  474799 node_conditions.go:123] node cpu capacity is 8
	I0810 22:48:28.172300  474799 node_conditions.go:105] duration metric: took 2.846583ms to run NodePressure ...
	I0810 22:48:28.172317  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:48:28.398816  474799 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0810 22:48:28.401532  474799 kubeadm.go:746] kubelet initialised
	I0810 22:48:28.401552  474799 kubeadm.go:747] duration metric: took 2.709344ms waiting for restarted kubelet to initialise ...
	I0810 22:48:28.401560  474799 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:48:28.404602  474799 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6955765f44-7hn52" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:28.411677  474799 pod_ready.go:92] pod "coredns-6955765f44-7hn52" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:28.411701  474799 pod_ready.go:81] duration metric: took 7.068642ms waiting for pod "coredns-6955765f44-7hn52" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:28.411711  474799 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:28.414756  474799 pod_ready.go:92] pod "etcd-test-preload-20210810224612-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:28.414772  474799 pod_ready.go:81] duration metric: took 3.055063ms waiting for pod "etcd-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:28.414782  474799 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:29.422963  474799 pod_ready.go:92] pod "kube-apiserver-test-preload-20210810224612-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:29.422993  474799 pod_ready.go:81] duration metric: took 1.0082035s waiting for pod "kube-apiserver-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:29.423005  474799 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:29.426512  474799 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210810224612-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:29.426531  474799 pod_ready.go:81] duration metric: took 3.519701ms waiting for pod "kube-controller-manager-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:29.426542  474799 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w22dk" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:29.762626  474799 pod_ready.go:92] pod "kube-proxy-w22dk" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:29.762646  474799 pod_ready.go:81] duration metric: took 336.097277ms waiting for pod "kube-proxy-w22dk" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:29.762656  474799 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:30.164010  474799 pod_ready.go:92] pod "kube-scheduler-test-preload-20210810224612-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:30.164032  474799 pod_ready.go:81] duration metric: took 401.369778ms waiting for pod "kube-scheduler-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:30.164045  474799 pod_ready.go:38] duration metric: took 1.762471601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:48:30.164065  474799 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0810 22:48:30.183453  474799 ops.go:34] apiserver oom_adj: -16
	I0810 22:48:30.183478  474799 kubeadm.go:604] restartCluster took 11.6294513s
	I0810 22:48:30.183487  474799 kubeadm.go:392] StartCluster complete in 11.704367656s
	I0810 22:48:30.183505  474799 settings.go:142] acquiring lock: {Name:mka213f92e424859b3fea9ed3e06c1529c3d79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:48:30.183602  474799 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:48:30.184348  474799 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mk4b0a8134f819d1f0c4fc03757f6964ae0e24de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:48:30.185032  474799 kapi.go:59] client config for test-preload-20210810224612-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:48:30.696071  474799 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-20210810224612-345780" rescaled to 1
	I0810 22:48:30.696132  474799 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}
	I0810 22:48:30.698278  474799 out.go:177] * Verifying Kubernetes components...
	I0810 22:48:30.696168  474799 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:48:30.696198  474799 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0810 22:48:30.698549  474799 addons.go:59] Setting storage-provisioner=true in profile "test-preload-20210810224612-345780"
	I0810 22:48:30.698572  474799 addons.go:135] Setting addon storage-provisioner=true in "test-preload-20210810224612-345780"
	W0810 22:48:30.698578  474799 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:48:30.698610  474799 addons.go:59] Setting default-storageclass=true in profile "test-preload-20210810224612-345780"
	I0810 22:48:30.698635  474799 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-20210810224612-345780"
	I0810 22:48:30.698372  474799 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:48:30.698616  474799 host.go:66] Checking if "test-preload-20210810224612-345780" exists ...
	I0810 22:48:30.698999  474799 cli_runner.go:115] Run: docker container inspect test-preload-20210810224612-345780 --format={{.State.Status}}
	I0810 22:48:30.699192  474799 cli_runner.go:115] Run: docker container inspect test-preload-20210810224612-345780 --format={{.State.Status}}
	I0810 22:48:30.750256  474799 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:48:30.750419  474799 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:48:30.750436  474799 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:48:30.750497  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:30.751378  474799 kapi.go:59] client config for test-preload-20210810224612-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224612-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:48:30.759108  474799 addons.go:135] Setting addon default-storageclass=true in "test-preload-20210810224612-345780"
	W0810 22:48:30.759132  474799 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:48:30.759164  474799 host.go:66] Checking if "test-preload-20210810224612-345780" exists ...
	I0810 22:48:30.759562  474799 cli_runner.go:115] Run: docker container inspect test-preload-20210810224612-345780 --format={{.State.Status}}
	I0810 22:48:30.781796  474799 start.go:716] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0810 22:48:30.781826  474799 node_ready.go:35] waiting up to 6m0s for node "test-preload-20210810224612-345780" to be "Ready" ...
	I0810 22:48:30.784454  474799 node_ready.go:49] node "test-preload-20210810224612-345780" has status "Ready":"True"
	I0810 22:48:30.784474  474799 node_ready.go:38] duration metric: took 2.623283ms waiting for node "test-preload-20210810224612-345780" to be "Ready" ...
	I0810 22:48:30.784484  474799 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:48:30.788430  474799 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6955765f44-7hn52" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:30.796421  474799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224612-345780/id_rsa Username:docker}
	I0810 22:48:30.804102  474799 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:48:30.804132  474799 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:48:30.804199  474799 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210810224612-345780
	I0810 22:48:30.844616  474799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33097 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224612-345780/id_rsa Username:docker}
	I0810 22:48:30.890006  474799 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:48:30.935097  474799 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:48:30.963664  474799 pod_ready.go:92] pod "coredns-6955765f44-7hn52" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:30.963692  474799 pod_ready.go:81] duration metric: took 175.235854ms waiting for pod "coredns-6955765f44-7hn52" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:30.963706  474799 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:31.103217  474799 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0810 22:48:31.103246  474799 addons.go:344] enableAddons completed in 407.059829ms
	I0810 22:48:31.363554  474799 pod_ready.go:92] pod "etcd-test-preload-20210810224612-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:31.363576  474799 pod_ready.go:81] duration metric: took 399.860616ms waiting for pod "etcd-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:31.363589  474799 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:31.763641  474799 pod_ready.go:92] pod "kube-apiserver-test-preload-20210810224612-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:31.763662  474799 pod_ready.go:81] duration metric: took 400.066955ms waiting for pod "kube-apiserver-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:31.763674  474799 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:32.163304  474799 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210810224612-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:32.163326  474799 pod_ready.go:81] duration metric: took 399.644407ms waiting for pod "kube-controller-manager-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:32.163338  474799 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w22dk" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:32.563418  474799 pod_ready.go:92] pod "kube-proxy-w22dk" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:32.563442  474799 pod_ready.go:81] duration metric: took 400.098074ms waiting for pod "kube-proxy-w22dk" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:32.563452  474799 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:32.963828  474799 pod_ready.go:92] pod "kube-scheduler-test-preload-20210810224612-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:48:32.963852  474799 pod_ready.go:81] duration metric: took 400.392315ms waiting for pod "kube-scheduler-test-preload-20210810224612-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:48:32.963867  474799 pod_ready.go:38] duration metric: took 2.179371984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
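The per-pod waits above are reported individually ("took 175.235854ms", "took 399.860616ms", ...) and then rolled up into the 2.179371984s "extra waiting" total. A minimal sketch of parsing such `duration metric` fragments, using sample strings copied from this log (not minikube's own parsing code):

```python
import re

# Sample "duration metric" fragments copied verbatim from the log above.
lines = [
    'took 175.235854ms waiting for pod "coredns-6955765f44-7hn52"',
    'took 399.860616ms waiting for pod "etcd-test-preload-20210810224612-345780"',
    'took 400.066955ms waiting for pod "kube-apiserver-test-preload-20210810224612-345780"',
]

def to_seconds(entry: str) -> float:
    """Extract 'took <value><unit>' and normalize the value to seconds."""
    m = re.search(r"took ([\d.]+)(ms|s)", entry)
    value, unit = float(m.group(1)), m.group(2)
    return value / 1000.0 if unit == "ms" else value

# Sum the sampled waits (the full log's rollup also includes the later
# controller-manager/proxy/scheduler waits, which are omitted here).
total = sum(to_seconds(line) for line in lines)
print(round(total, 6))  # 0.975163
```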
	I0810 22:48:32.963888  474799 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:48:32.963940  474799 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:48:32.986660  474799 api_server.go:70] duration metric: took 2.290493779s to wait for apiserver process to appear ...
	I0810 22:48:32.986694  474799 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:48:32.986704  474799 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:48:32.991164  474799 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0810 22:48:32.991964  474799 api_server.go:139] control plane version: v1.17.3
	I0810 22:48:32.991983  474799 api_server.go:129] duration metric: took 5.283502ms to wait for apiserver health ...
	I0810 22:48:32.991992  474799 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:48:33.165332  474799 system_pods.go:59] 8 kube-system pods found
	I0810 22:48:33.165372  474799 system_pods.go:61] "coredns-6955765f44-7hn52" [63f5ee2b-efcd-4770-b07a-1935a8d6a75e] Running
	I0810 22:48:33.165379  474799 system_pods.go:61] "etcd-test-preload-20210810224612-345780" [9cadd947-fcd8-47fe-b01b-2fe903d538b8] Running
	I0810 22:48:33.165385  474799 system_pods.go:61] "kindnet-d2rsl" [fbd95970-61b2-4490-b4e4-e228346528b8] Running
	I0810 22:48:33.165391  474799 system_pods.go:61] "kube-apiserver-test-preload-20210810224612-345780" [2992f40a-e117-4337-976f-76a0286ecaea] Running
	I0810 22:48:33.165398  474799 system_pods.go:61] "kube-controller-manager-test-preload-20210810224612-345780" [87e9c0ec-13c9-4fa6-a5ea-0844457d6840] Running
	I0810 22:48:33.165403  474799 system_pods.go:61] "kube-proxy-w22dk" [8332ab8d-3a0d-4152-ad3a-5755f3767d14] Running
	I0810 22:48:33.165409  474799 system_pods.go:61] "kube-scheduler-test-preload-20210810224612-345780" [bd7d7b1c-790a-4877-b730-3d9e41acbb93] Running
	I0810 22:48:33.165414  474799 system_pods.go:61] "storage-provisioner" [9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6] Running
	I0810 22:48:33.165422  474799 system_pods.go:74] duration metric: took 173.423382ms to wait for pod list to return data ...
	I0810 22:48:33.165438  474799 default_sa.go:34] waiting for default service account to be created ...
	I0810 22:48:33.363679  474799 default_sa.go:45] found service account: "default"
	I0810 22:48:33.363704  474799 default_sa.go:55] duration metric: took 198.25976ms for default service account to be created ...
	I0810 22:48:33.363712  474799 system_pods.go:116] waiting for k8s-apps to be running ...
	I0810 22:48:33.567477  474799 system_pods.go:86] 8 kube-system pods found
	I0810 22:48:33.567505  474799 system_pods.go:89] "coredns-6955765f44-7hn52" [63f5ee2b-efcd-4770-b07a-1935a8d6a75e] Running
	I0810 22:48:33.567512  474799 system_pods.go:89] "etcd-test-preload-20210810224612-345780" [9cadd947-fcd8-47fe-b01b-2fe903d538b8] Running
	I0810 22:48:33.567516  474799 system_pods.go:89] "kindnet-d2rsl" [fbd95970-61b2-4490-b4e4-e228346528b8] Running
	I0810 22:48:33.567521  474799 system_pods.go:89] "kube-apiserver-test-preload-20210810224612-345780" [2992f40a-e117-4337-976f-76a0286ecaea] Running
	I0810 22:48:33.567525  474799 system_pods.go:89] "kube-controller-manager-test-preload-20210810224612-345780" [87e9c0ec-13c9-4fa6-a5ea-0844457d6840] Running
	I0810 22:48:33.567528  474799 system_pods.go:89] "kube-proxy-w22dk" [8332ab8d-3a0d-4152-ad3a-5755f3767d14] Running
	I0810 22:48:33.567532  474799 system_pods.go:89] "kube-scheduler-test-preload-20210810224612-345780" [bd7d7b1c-790a-4877-b730-3d9e41acbb93] Running
	I0810 22:48:33.567536  474799 system_pods.go:89] "storage-provisioner" [9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6] Running
	I0810 22:48:33.567543  474799 system_pods.go:126] duration metric: took 203.826027ms to wait for k8s-apps to be running ...
	I0810 22:48:33.567550  474799 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:48:33.567597  474799 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:48:33.577418  474799 system_svc.go:56] duration metric: took 9.859469ms WaitForService to wait for kubelet.
	I0810 22:48:33.577442  474799 kubeadm.go:547] duration metric: took 2.881283874s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:48:33.577469  474799 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:48:33.763593  474799 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0810 22:48:33.763618  474799 node_conditions.go:123] node cpu capacity is 8
	I0810 22:48:33.763630  474799 node_conditions.go:105] duration metric: took 186.15703ms to run NodePressure ...
	I0810 22:48:33.763641  474799 start.go:231] waiting for startup goroutines ...
	I0810 22:48:33.811217  474799 start.go:462] kubectl: 1.20.5, cluster: 1.17.3 (minor skew: 3)
	I0810 22:48:33.813606  474799 out.go:177] 
	W0810 22:48:33.813757  474799 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.17.3.
	I0810 22:48:33.815547  474799 out.go:177]   - Want kubectl v1.17.3? Try 'minikube kubectl -- get pods -A'
	I0810 22:48:33.817197  474799 out.go:177] * Done! kubectl is now configured to use "test-preload-20210810224612-345780" cluster and "default" namespace by default
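The warning above is driven by the minor-version delta between the host kubectl and the cluster ("kubectl: 1.20.5, cluster: 1.17.3 (minor skew: 3)"). A sketch of that skew arithmetic, reproducing the reported numbers rather than minikube's actual implementation:

```python
def minor_skew(client: str, server: str) -> int:
    """Absolute difference of the minor components of two x.y.z version strings."""
    client_minor = int(client.split(".")[1])
    server_minor = int(server.split(".")[1])
    return abs(client_minor - server_minor)

# Versions taken from the start.go:462 line in this run.
print(minor_skew("1.20.5", "1.17.3"))  # 3
```

A skew of 3 exceeds the one-minor-version window Kubernetes supports between kubectl and the API server, which is why the report suggests `minikube kubectl` instead.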
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:46:15 UTC, end at Tue 2021-08-10 22:48:35 UTC. --
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.398800946Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.408915007Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.408975474Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.409002641Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.462096572Z" level=info msg="Stopped pod sandbox: 58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328" id=4848ea73-24ac-45e7-87e4-9c5cf03fe426 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.463995496Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.468080107Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.471136110Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.484791745Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.484825830Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.484847329Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.484901789Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.490417083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.495146631Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.495721998Z" level=info msg="Stopped pod sandbox: fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110" id=1bafbf59-0684-4683-8e63-3d8af5fbefcc name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.536071831Z" level=info msg="Stopped pod sandbox: 4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97" id=2a701f38-b15a-4bb2-bcd1-6bc6687f4764 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.557584477Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.569368068Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 10 22:48:27 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:27.569407462Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 10 22:48:29 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:29.258989461Z" level=info msg="Stopping pod sandbox: fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110" id=163bab31-02d1-4252-99bb-de319229b8fd name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:48:29 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:29.259035640Z" level=info msg="Stopping pod sandbox: 4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97" id=578219da-56a7-4a3b-8d26-61cb455db262 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:48:29 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:29.259078765Z" level=info msg="Stopped pod sandbox (already stopped): 4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97" id=578219da-56a7-4a3b-8d26-61cb455db262 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:48:29 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:29.259046571Z" level=info msg="Stopped pod sandbox (already stopped): fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110" id=163bab31-02d1-4252-99bb-de319229b8fd name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:48:29 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:29.258989464Z" level=info msg="Stopping pod sandbox: 58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328" id=7e2e1bd1-50bd-40c2-9339-2981cda78617 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 10 22:48:29 test-preload-20210810224612-345780 crio[4472]: time="2021-08-10 22:48:29.259215690Z" level=info msg="Stopped pod sandbox (already stopped): 58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328" id=7e2e1bd1-50bd-40c2-9339-2981cda78617 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID
	442a46d2bf522       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     8 seconds ago        Running             storage-provisioner       1                   38bbcbe52a4e3
	6034b84394a0a       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                     8 seconds ago        Running             kindnet-cni               1                   bf89379660d40
	45a22ebe6bd16       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     8 seconds ago        Running             coredns                   1                   5596cda596f7b
	9ae649778dc67       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19                                     8 seconds ago        Running             kube-proxy                1                   ce07770ac078d
	82032a984164b       b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302                                     12 seconds ago       Running             kube-controller-manager   0                   2c529c1e844dd
	8b6a0a2e5bfbc       d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad                                     12 seconds ago       Running             kube-scheduler            0                   769e5c949f819
	7be527fefc226       90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b                                     12 seconds ago       Running             kube-apiserver            0                   6e05e438694c8
	21d2633c0d1f9       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                     12 seconds ago       Running             etcd                      1                   4cb9d92813b0f
	638b4c3ba6d67       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     37 seconds ago       Exited              coredns                   0                   5596cda596f7b
	1558479f94677       docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1   55 seconds ago       Exited              kindnet-cni               0                   bf89379660d40
	746c6be66098b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     57 seconds ago       Exited              storage-provisioner       0                   38bbcbe52a4e3
	17e1217e99286       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19                                     About a minute ago   Exited              kube-proxy                0                   ce07770ac078d
	3af5046c32f04       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                     About a minute ago   Exited              etcd                      0                   4cb9d92813b0f
	41f54e14eb778       0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2                                     About a minute ago   Exited              kube-apiserver            0                   fb488292b666b
	a2f8509834792       78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28                                     About a minute ago   Exited              kube-scheduler            0                   58b1dec2e2e32
	4ccccb58a4b54       5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056                                     About a minute ago   Exited              kube-controller-manager   0                   4e83c3884592b
	
	* 
	* ==> coredns [45a22ebe6bd16b606d293d8aeabad3aa139f23bdfeb46c4fa335187b5c3e2cfb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> coredns [638b4c3ba6d67f98714c9d4f7eefb2e94dd4932fe161be0bd7c63bb7dc1f031f] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-20210810224612-345780
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-20210810224612-345780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=test-preload-20210810224612-345780
	                    minikube.k8s.io/updated_at=2021_08_10T22_47_18_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:47:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-20210810224612-345780
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:48:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:48:26 +0000   Tue, 10 Aug 2021 22:47:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:48:26 +0000   Tue, 10 Aug 2021 22:47:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:48:26 +0000   Tue, 10 Aug 2021 22:47:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:48:26 +0000   Tue, 10 Aug 2021 22:47:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    test-preload-20210810224612-345780
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                081f7a82-5385-4af0-8ac5-48cf3bb938c1
	  Boot ID:                    73822e98-d94c-4da2-a874-acfa9b587b30
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.17.3
	  Kube-Proxy Version:         v1.17.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6955765f44-7hn52                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     62s
	  kube-system                 etcd-test-preload-20210810224612-345780                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kindnet-d2rsl                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      62s
	  kube-system                 kube-apiserver-test-preload-20210810224612-345780             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-test-preload-20210810224612-345780    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-w22dk                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-test-preload-20210810224612-345780             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             120Mi (0%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                            Message
	  ----    ------                   ----               ----                                            -------
	  Normal  Starting                 77s                kubelet, test-preload-20210810224612-345780     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s                kubelet, test-preload-20210810224612-345780     Node test-preload-20210810224612-345780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet, test-preload-20210810224612-345780     Node test-preload-20210810224612-345780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s                kubelet, test-preload-20210810224612-345780     Node test-preload-20210810224612-345780 status is now: NodeHasSufficientPID
	  Normal  NodeReady                67s                kubelet, test-preload-20210810224612-345780     Node test-preload-20210810224612-345780 status is now: NodeReady
	  Normal  Starting                 61s                kube-proxy, test-preload-20210810224612-345780  Starting kube-proxy.
	  Normal  Starting                 14s                kubelet, test-preload-20210810224612-345780     Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet, test-preload-20210810224612-345780     Node test-preload-20210810224612-345780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet, test-preload-20210810224612-345780     Node test-preload-20210810224612-345780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x8 over 14s)  kubelet, test-preload-20210810224612-345780     Node test-preload-20210810224612-345780 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kube-proxy, test-preload-20210810224612-345780  Starting kube-proxy.
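	The `Allocated resources` summary above comes from `kubectl describe node`; a per-resource figure can be pulled out of that output with a short pipeline. A minimal sketch, with illustrative sample lines inlined in place of piping in the real `kubectl describe node <node-name>` output:

	```shell
	# Extract the CPU request total from a kubectl-describe-node style summary.
	# The sample input below is illustrative, not the exact report output.
	printf '%s\n' \
	  '  Resource           Requests    Limits' \
	  '  --------           --------    ------' \
	  '  cpu                750m (9%)   100m (1%)' \
	  '  memory             120Mi (0%)  220Mi (0%)' \
	| awk '$1 == "cpu" { print $2 }'
	# → 750m
	```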
	
	* 
	* ==> dmesg <==
	* [Aug10 22:40] cgroup: cgroup2: unknown option "nsdelegate"
	[ +22.268991] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth6be2f90f
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 00 ad fb 17 60 08 06        ......v....`..
	[  +0.000254] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethda187793
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 56 9f fa c9 20 43 08 06        ......V... C..
	[Aug10 22:41] cgroup: cgroup2: unknown option "nsdelegate"
	[ +27.940558] cgroup: cgroup2: unknown option "nsdelegate"
	[  +6.520496] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth59c7b706
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 c1 57 46 88 5d 08 06        ......f.WF.]..
	[Aug10 22:42] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:43] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe188940c
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff aa 53 7b 18 04 d5 08 06        .......S{.....
	[  +0.671873] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethaa172110
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff d2 83 eb e0 86 e0 08 06        ..............
	[ +18.062068] cgroup: cgroup2: unknown option "nsdelegate"
	[ +27.455202] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth8796bea3
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 36 3b 86 45 f8 5f 08 06        ......6;.E._..
	[  +2.468146] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:46] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:47] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d2 2d f6 e7 ae 35 08 06        .......-...5..
	[  +0.000003] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff d2 2d f6 e7 ae 35 08 06        .......-...5..
	[ +13.226063] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth51b39a91
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 0e 70 99 43 d3 54 08 06        .......p.C.T..
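	The repeated "martian source" warnings above are the kernel flagging packets whose source address is implausible for the interface they arrived on; when triaging, it helps to count them per device. A minimal sketch, with illustrative sample lines inlined in place of `dmesg | grep martian`:

	```shell
	# Count "martian source" warnings per network device from a dmesg excerpt.
	# Sample lines are inlined for illustration only.
	printf '%s\n' \
	  '[Aug10 22:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth6be2f90f' \
	  '[Aug10 22:43] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe188940c' \
	  '[Aug10 22:47] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0' \
	| grep -o 'on dev [[:alnum:]]*' | sort | uniq -c | sort -rn
	```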
	
	* 
	* ==> etcd [21d2633c0d1f98a94bce716279772a8ec585afabb54924d1c958a6b56ea30d7d] <==
	* 2021-08-10 22:48:22.191436 I | embed: initial advertise peer URLs = https://192.168.49.2:2380
	2021-08-10 22:48:22.191441 I | embed: initial cluster = 
	2021-08-10 22:48:22.196960 I | etcdserver: restarting member aec36adc501070cc in cluster fa54960ea34d58be at commit index 436
	raft2021/08/10 22:48:22 INFO: aec36adc501070cc switched to configuration voters=()
	raft2021/08/10 22:48:22 INFO: aec36adc501070cc became follower at term 2
	raft2021/08/10 22:48:22 INFO: newRaft aec36adc501070cc [peers: [], term: 2, commit: 436, applied: 0, lastindex: 436, lastterm: 2]
	2021-08-10 22:48:22.261714 W | auth: simple token is not cryptographically signed
	2021-08-10 22:48:22.263772 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2021/08/10 22:48:22 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-10 22:48:22.264601 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-10 22:48:22.264758 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-10 22:48:22.264813 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-10 22:48:22.266494 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-10 22:48:22.266579 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-10 22:48:22.266707 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/10 22:48:24 INFO: aec36adc501070cc is starting a new election at term 2
	raft2021/08/10 22:48:24 INFO: aec36adc501070cc became candidate at term 3
	raft2021/08/10 22:48:24 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3
	raft2021/08/10 22:48:24 INFO: aec36adc501070cc became leader at term 3
	raft2021/08/10 22:48:24 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3
	2021-08-10 22:48:24.098918 I | etcdserver: published {Name:test-preload-20210810224612-345780 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-10 22:48:24.098943 I | embed: ready to serve client requests
	2021-08-10 22:48:24.099100 I | embed: ready to serve client requests
	2021-08-10 22:48:24.101116 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-10 22:48:24.101150 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> etcd [3af5046c32f04cc018a14ec9ed80922dc8a73ff55fad564fbbe3085d28446622] <==
	* 2021-08-10 22:47:11.885830 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/10 22:47:11 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-10 22:47:11.886392 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-10 22:47:11.887805 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-10 22:47:11.887910 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-10 22:47:11.887989 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/10 22:47:12 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/10 22:47:12 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/10 22:47:12 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/10 22:47:12 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/10 22:47:12 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-10 22:47:12.479216 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-10 22:47:12.480457 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-10 22:47:12.480555 I | etcdserver: published {Name:test-preload-20210810224612-345780 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-10 22:47:12.480579 I | embed: ready to serve client requests
	2021-08-10 22:47:12.480642 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-10 22:47:12.480675 I | embed: ready to serve client requests
	2021-08-10 22:47:12.482612 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-10 22:47:12.482697 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-10 22:47:30.792779 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/expand-controller\" " with result "range_response_count:1 size:201" took too long (957.632466ms) to execute
	2021-08-10 22:47:30.792858 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (808.480501ms) to execute
	2021-08-10 22:47:36.661314 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-w22dk\" " with result "range_response_count:1 size:2167" took too long (1.368394216s) to execute
	2021-08-10 22:47:36.661367 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-6955765f44-7hn52\" " with result "range_response_count:1 size:1705" took too long (1.576939814s) to execute
	2021-08-10 22:47:36.793327 W | etcdserver: read-only range request "key:\"/registry/minions/test-preload-20210810224612-345780\" " with result "range_response_count:1 size:3426" took too long (129.382986ms) to execute
	2021-08-10 22:47:36.793462 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-w22dk\" " with result "range_response_count:1 size:2167" took too long (128.622104ms) to execute
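	The etcd "took too long" warnings above report request latencies of up to ~1.5s, which is worth extracting when judging whether slow storage is behind the failure. A minimal sketch for pulling the latency out of such a line (the sample line is abbreviated for illustration; in practice you might grep the etcd container log, e.g. via `crictl logs <container-id>`):

	```shell
	# Pull the reported latency out of an etcd "took too long" warning.
	# The sample line below is abbreviated for illustration.
	printf '%s\n' \
	  'W | etcdserver: read-only range request "key:..." took too long (1.368394216s) to execute' \
	| sed -n 's/.*took too long (\([^)]*\)).*/\1/p'
	# → 1.368394216s
	```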
	
	* 
	* ==> kernel <==
	*  22:48:35 up  2:31,  0 users,  load average: 2.37, 1.53, 1.79
	Linux test-preload-20210810224612-345780 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [41f54e14eb7787b2b1a6660d28aede8abcf9904bda1b31081962969bc7608b71] <==
	* W0810 22:48:19.344354       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344374       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344390       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344402       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344408       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344415       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344418       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344427       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344448       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344453       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344456       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344459       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344376       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344472       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344492       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344493       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344496       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344510       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344517       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344536       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344537       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344539       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:48:19.344555       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0810 22:48:19.344735       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick
	W0810 22:48:19.344904       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
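	The burst of near-identical grpc "connection refused" lines above (logged while the apiserver lost its etcd connection during the restart) is easier to read once collapsed to distinct messages with counts. A minimal sketch, with shortened sample lines inlined for illustration:

	```shell
	# Collapse repeated klog-style lines by stripping the timestamp/pid prefix,
	# so a burst of identical grpc errors shows up as one counted line.
	# Sample lines are shortened for illustration.
	printf '%s\n' \
	  'W0810 22:48:19.344354       1 clientconn.go:1120] grpc: connection refused' \
	  'W0810 22:48:19.344374       1 clientconn.go:1120] grpc: connection refused' \
	  'W0810 22:48:19.344390       1 clientconn.go:1120] grpc: connection refused' \
	| sed 's/^[A-Z][0-9.: ]*//' | sort | uniq -c | sort -rn
	# → "      3 clientconn.go:1120] grpc: connection refused"
	```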
	
	* 
	* ==> kube-apiserver [7be527fefc226202a957112fef32130cec6e4a09215f73a3a46d3299661828f2] <==
	* I0810 22:48:26.428726       1 autoregister_controller.go:140] Starting autoregister controller
	I0810 22:48:26.428993       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0810 22:48:26.429015       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0810 22:48:26.429021       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	I0810 22:48:26.429342       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0810 22:48:26.429414       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
	I0810 22:48:26.429458       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0810 22:48:26.429482       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0810 22:48:26.429420       1 controller.go:81] Starting OpenAPI AggregationController
	E0810 22:48:26.464334       1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0810 22:48:26.472903       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0810 22:48:26.557314       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
	I0810 22:48:26.557467       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0810 22:48:26.557673       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0810 22:48:26.557950       1 cache.go:39] Caches are synced for autoregister controller
	I0810 22:48:26.558073       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0810 22:48:27.457181       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0810 22:48:27.457208       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0810 22:48:27.457219       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0810 22:48:27.461214       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
	I0810 22:48:28.156102       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0810 22:48:28.252511       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0810 22:48:28.284558       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0810 22:48:28.388667       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0810 22:48:28.393204       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [4ccccb58a4b54b899dea44511c5c8c98965d448afec164c8c3a17c1a709df7b1] <==
	* E0810 22:48:19.534865       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=35&timeout=6m29s&timeoutSeconds=389&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534956       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534988       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://control-plane.minikube.internal:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535028       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://control-plane.minikube.internal:8443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=401&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535070       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=5m25s&timeoutSeconds=325&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535109       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://control-plane.minikube.internal:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=146&timeout=7m2s&timeoutSeconds=422&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535141       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://control-plane.minikube.internal:8443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=1&timeout=7m31s&timeoutSeconds=451&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535173       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=413&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535198       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=310&timeout=9m17s&timeoutSeconds=557&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535202       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=376&timeout=5m3s&timeoutSeconds=303&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535248       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=352&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535259       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=8m3s&timeoutSeconds=483&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535274       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://control-plane.minikube.internal:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=44&timeout=9m53s&timeoutSeconds=593&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535287       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=350&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535295       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://control-plane.minikube.internal:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=1&timeout=9m51s&timeoutSeconds=591&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535322       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=1&timeout=6m6s&timeoutSeconds=366&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535360       1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://control-plane.minikube.internal:8443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=41&timeout=8m27s&timeoutSeconds=507&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535366       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=186&timeout=6m17s&timeoutSeconds=377&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535395       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://control-plane.minikube.internal:8443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=354&timeout=7m30s&timeoutSeconds=450&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535438       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=9m16s&timeoutSeconds=556&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535465       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=384&timeout=8m53s&timeoutSeconds=533&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535471       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=400&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535488       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://control-plane.minikube.internal:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=1&timeout=8m16s&timeoutSeconds=496&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535628       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535902       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=359&timeout=5m13s&timeoutSeconds=313&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [82032a984164b11556daea5c17d309792aa6dbb59aa6bca9d23b22daf215b746] <==
	* I0810 22:48:29.721450       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
	I0810 22:48:29.721515       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
	I0810 22:48:29.721533       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
	I0810 22:48:29.721547       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
	I0810 22:48:29.721592       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0810 22:48:29.721607       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
	I0810 22:48:29.721620       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
	W0810 22:48:29.721630       1 shared_informer.go:415] resyncPeriod 56575894989910 is smaller than resyncCheckPeriod 85240461890649 and the informer has already started. Changing it to 85240461890649
	I0810 22:48:29.721659       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
	I0810 22:48:29.721674       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
	I0810 22:48:29.721688       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0810 22:48:29.721704       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0810 22:48:29.721718       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0810 22:48:29.721735       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0810 22:48:29.721749       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I0810 22:48:29.721764       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0810 22:48:29.721800       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I0810 22:48:29.721819       1 controllermanager.go:533] Started "resourcequota"
	I0810 22:48:29.721855       1 resource_quota_controller.go:271] Starting resource quota controller
	I0810 22:48:29.721877       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I0810 22:48:29.721901       1 resource_quota_monitor.go:303] QuotaMonitor running
	I0810 22:48:29.726846       1 controllermanager.go:533] Started "ttl"
	I0810 22:48:29.726982       1 ttl_controller.go:116] Starting TTL controller
	I0810 22:48:29.727000       1 shared_informer.go:197] Waiting for caches to sync for TTL
	I0810 22:48:29.769495       1 node_ipam_controller.go:94] Sending events to api server.
	
	* 
	* ==> kube-proxy [17e1217e992869a72af5c74cb768bff66ec39d1b359b87280c6401cd87dede67] <==
	* W0810 22:47:34.159030       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0810 22:47:34.167016       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I0810 22:47:34.167055       1 server_others.go:145] Using iptables Proxier.
	I0810 22:47:34.167311       1 server.go:571] Version: v1.17.0
	I0810 22:47:34.167868       1 config.go:131] Starting endpoints config controller
	I0810 22:47:34.167953       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0810 22:47:34.167903       1 config.go:313] Starting service config controller
	I0810 22:47:34.168135       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0810 22:47:34.268158       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0810 22:47:34.268273       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9ae649778dc67b48bd659b770976fcf0be86fb734dd22bb034db7ac58e1f5162] <==
	* W0810 22:48:27.087142       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0810 22:48:27.095413       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I0810 22:48:27.095456       1 server_others.go:145] Using iptables Proxier.
	I0810 22:48:27.095844       1 server.go:571] Version: v1.17.0
	I0810 22:48:27.097357       1 config.go:131] Starting endpoints config controller
	I0810 22:48:27.097404       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0810 22:48:27.097647       1 config.go:313] Starting service config controller
	I0810 22:48:27.097659       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0810 22:48:27.197780       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0810 22:48:27.197883       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [8b6a0a2e5bfbcec291847242b24e4974300577cca39b1cdfa43f5b13a6c5b315] <==
	* I0810 22:48:22.893887       1 serving.go:312] Generated self-signed cert in-memory
	W0810 22:48:23.358488       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0810 22:48:23.358555       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0810 22:48:26.569336       1 authorization.go:47] Authorization is disabled
	W0810 22:48:26.569365       1 authentication.go:92] Authentication is disabled
	I0810 22:48:26.569378       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0810 22:48:26.570782       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0810 22:48:26.570926       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0810 22:48:26.570944       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0810 22:48:26.570962       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	I0810 22:48:26.573057       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0810 22:48:26.573080       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0810 22:48:26.671176       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	I0810 22:48:26.673242       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [a2f8509834792c5a3a501d018ff69cd005ee92ec3da88b74d6c87546916ccb3b] <==
	* E0810 22:47:16.578395       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:47:16.578515       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:47:16.579822       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:47:16.580517       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:47:16.581621       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:47:16.582824       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:47:16.583899       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:47:16.585017       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:47:16.586055       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:47:16.587175       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:47:16.588362       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:47:16.589532       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0810 22:47:17.673328       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0810 22:48:19.534137       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=186&timeout=8m46s&timeoutSeconds=526&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534143       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534228       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534298       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=35&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534574       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m46s&timeoutSeconds=586&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534661       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=384&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534761       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=400&timeout=9m4s&timeoutSeconds=544&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534811       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=359&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534902       1 reflector.go:320] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&resourceVersion=398&timeoutSeconds=382&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.534901       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=154&timeout=5m51s&timeoutSeconds=351&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535202       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=8m50s&timeoutSeconds=530&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0810 22:48:19.535599       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=5m18s&timeoutSeconds=318&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:46:15 UTC, end at Tue 2021-08-10 22:48:35 UTC. --
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.474710    6439 kubelet.go:1645] Trying to delete pod kube-controller-manager-test-preload-20210810224612-345780_kube-system 02be8a62-efef-44de-ae74-58dad6b76644
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.481031    6439 kubelet_node_status.go:112] Node test-preload-20210810224612-345780 was previously registered
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.481123    6439 kubelet_node_status.go:73] Successfully registered node test-preload-20210810224612-345780
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.482593    6439 kuberuntime_manager.go:981] updating runtime config through cri with podcidr 10.244.0.0/24
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: W0810 22:48:26.482609    6439 kubelet.go:1649] Deleted mirror pod "kube-apiserver-test-preload-20210810224612-345780_kube-system(9b8eade5-e6fc-418f-8318-7e979d877bcd)" because it is outdated
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: W0810 22:48:26.482622    6439 kubelet.go:1649] Deleted mirror pod "kube-controller-manager-test-preload-20210810224612-345780_kube-system(02be8a62-efef-44de-ae74-58dad6b76644)" because it is outdated
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: W0810 22:48:26.482734    6439 kubelet.go:1649] Deleted mirror pod "kube-scheduler-test-preload-20210810224612-345780_kube-system(5801dd14-807a-4b68-93e6-1663d440c244)" because it is outdated
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.483313    6439 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.557437    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-mp8hh" (UniqueName: "kubernetes.io/secret/63f5ee2b-efcd-4770-b07a-1935a8d6a75e-coredns-token-mp8hh") pod "coredns-6955765f44-7hn52" (UID: "63f5ee2b-efcd-4770-b07a-1935a8d6a75e")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.557487    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/63f5ee2b-efcd-4770-b07a-1935a8d6a75e-config-volume") pod "coredns-6955765f44-7hn52" (UID: "63f5ee2b-efcd-4770-b07a-1935a8d6a75e")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: W0810 22:48:26.561828    6439 status_manager.go:546] Failed to update status for pod "kube-apiserver-test-preload-20210810224612-345780_kube-system(9b8eade5-e6fc-418f-8318-7e979d877bcd)": failed to patch status "{\"metadata\":{\"uid\":\"9b8eade5-e6fc-418f-8318-7e979d877bcd\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2021-08-10T22:48:21Z\",\"type\":\"Initialized\"},{\"lastTransitionTime\":\"2021-08-10T22:48:21Z\",\"message\":\"containers with unready status: [kube-apiserver]\",\"reason\":\"ContainersNotReady\",\"status\":\"False\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2021-08-10T22:48:21Z\",\"message\":\"containers with unready status: [kube-apiserver]\",\"reason\":\"ContainersNotReady\",\"status\":\"False\",\"type\":\"ContainersReady\"},{\"lastTransitionTime\":\"2021-08-10T22:48:21Z\",\"type\":\"PodScheduled\"}],\"containerStatuses\":[{\"image\":\"k8s.gcr.io/kube-apiserver:v1.17.3\",\"imageID\":\"\",\"lastState\":{},\"name\":\"kube-apiserver\",\"ready\":false,\"restartCount\":0,\"started\":false,\"state\":{\"waiting\":{\"reason\":\"ContainerCreating\"}}}],\"phase\":\"Pending\",\"podIPs\":null,\"startTime\":\"2021-08-10T22:48:21Z\"}}" for pod "kube-system"/"kube-apiserver-test-preload-20210810224612-345780": pods "kube-apiserver-test-preload-20210810224612-345780" not found
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.657819    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6-tmp") pod "storage-provisioner" (UID: "9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.657884    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/fbd95970-61b2-4490-b4e4-e228346528b8-lib-modules") pod "kindnet-d2rsl" (UID: "fbd95970-61b2-4490-b4e4-e228346528b8")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.657936    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/8332ab8d-3a0d-4152-ad3a-5755f3767d14-lib-modules") pod "kube-proxy-w22dk" (UID: "8332ab8d-3a0d-4152-ad3a-5755f3767d14")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.657992    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8332ab8d-3a0d-4152-ad3a-5755f3767d14-kube-proxy") pod "kube-proxy-w22dk" (UID: "8332ab8d-3a0d-4152-ad3a-5755f3767d14")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.658017    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/8332ab8d-3a0d-4152-ad3a-5755f3767d14-xtables-lock") pod "kube-proxy-w22dk" (UID: "8332ab8d-3a0d-4152-ad3a-5755f3767d14")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.658037    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-6m8t5" (UniqueName: "kubernetes.io/secret/8332ab8d-3a0d-4152-ad3a-5755f3767d14-kube-proxy-token-6m8t5") pod "kube-proxy-w22dk" (UID: "8332ab8d-3a0d-4152-ad3a-5755f3767d14")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.658056    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-b8wzn" (UniqueName: "kubernetes.io/secret/fbd95970-61b2-4490-b4e4-e228346528b8-kindnet-token-b8wzn") pod "kindnet-d2rsl" (UID: "fbd95970-61b2-4490-b4e4-e228346528b8")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.658194    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/fbd95970-61b2-4490-b4e4-e228346528b8-xtables-lock") pod "kindnet-d2rsl" (UID: "fbd95970-61b2-4490-b4e4-e228346528b8")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.658245    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-dc2gn" (UniqueName: "kubernetes.io/secret/9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6-storage-provisioner-token-dc2gn") pod "storage-provisioner" (UID: "9db99b3b-fda3-4d7e-af7c-9d2a73fef3c6")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.658363    6439 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/fbd95970-61b2-4490-b4e4-e228346528b8-cni-cfg") pod "kindnet-d2rsl" (UID: "fbd95970-61b2-4490-b4e4-e228346528b8")
	Aug 10 22:48:26 test-preload-20210810224612-345780 kubelet[6439]: I0810 22:48:26.658404    6439 reconciler.go:156] Reconciler: start to sync state
	Aug 10 22:48:28 test-preload-20210810224612-345780 kubelet[6439]: W0810 22:48:28.297116    6439 pod_container_deletor.go:75] Container "4e83c3884592b501206636ccf30e66ea8489d700e164ad798b95f1b93d134a97" not found in pod's containers
	Aug 10 22:48:28 test-preload-20210810224612-345780 kubelet[6439]: W0810 22:48:28.298330    6439 pod_container_deletor.go:75] Container "fb488292b666bcb70b1610723c4b9019bb6ab010e3ef553effdd58f83fc04110" not found in pod's containers
	Aug 10 22:48:28 test-preload-20210810224612-345780 kubelet[6439]: W0810 22:48:28.357780    6439 pod_container_deletor.go:75] Container "58b1dec2e2e327e4b76521e5268dedfc336e726217b09eee1f2f29e22bd9e328" not found in pod's containers
	
	* 
	* ==> storage-provisioner [442a46d2bf5228e66bedfc595af2660d39ed90dfb9c7bf11ac7fa7f95aa941f7] <==
	* I0810 22:48:27.158863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:48:27.167717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:48:27.167774       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [746c6be66098b57da4cf5dd833130d1e7ad1dce615839a2ae460c0c22c3df05b] <==
	* I0810 22:47:37.184764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:47:37.192889       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:47:37.193001       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:47:37.198184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:47:37.198286       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c867027d-790d-459b-a4cc-168110934011", APIVersion:"v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-20210810224612-345780_7d622eeb-5b1a-40be-a11c-646e61191f52 became leader
	I0810 22:47:37.198310       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-20210810224612-345780_7d622eeb-5b1a-40be-a11c-646e61191f52!
	I0810 22:47:37.298887       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-20210810224612-345780_7d622eeb-5b1a-40be-a11c-646e61191f52!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-20210810224612-345780 -n test-preload-20210810224612-345780
helpers_test.go:262: (dbg) Run:  kubectl --context test-preload-20210810224612-345780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context test-preload-20210810224612-345780 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context test-preload-20210810224612-345780 describe pod : exit status 1 (50.086244ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context test-preload-20210810224612-345780 describe pod : exit status 1
helpers_test.go:176: Cleaning up "test-preload-20210810224612-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210810224612-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210810224612-345780: (4.064857263s)
--- FAIL: TestPreload (147.82s)

TestScheduledStopUnix (63.55s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210810224840-345780 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210810224840-345780 --memory=2048 --driver=docker  --container-runtime=crio: (27.465460142s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210810224840-345780 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210810224840-345780 -n scheduled-stop-20210810224840-345780
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210810224840-345780 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210810224840-345780 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210810224840-345780 -n scheduled-stop-20210810224840-345780
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210810224840-345780
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210810224840-345780 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0810 22:49:22.355604  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210810224840-345780
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210810224840-345780: exit status 3 (2.146346312s)

-- stdout --
	scheduled-stop-20210810224840-345780
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	E0810 22:49:38.296430  483764 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0810 22:49:38.296478  483764 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3

-- stdout --
	scheduled-stop-20210810224840-345780
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	E0810 22:49:38.296430  483764 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0810 22:49:38.296478  483764 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

** /stderr **
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-10 22:49:38.300379494 +0000 UTC m=+1808.128863297
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210810224840-345780
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210810224840-345780:

-- stdout --
	[
	    {
	        "Id": "ef09c3ac2dfc0509adb9cbed3625806bff84747ba5dfaecd2936731c60beabd2",
	        "Created": "2021-08-10T22:48:41.90115113Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:48:42.3687802Z",
	            "FinishedAt": "2021-08-10T22:49:37.980659235Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/ef09c3ac2dfc0509adb9cbed3625806bff84747ba5dfaecd2936731c60beabd2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ef09c3ac2dfc0509adb9cbed3625806bff84747ba5dfaecd2936731c60beabd2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ef09c3ac2dfc0509adb9cbed3625806bff84747ba5dfaecd2936731c60beabd2/hosts",
	        "LogPath": "/var/lib/docker/containers/ef09c3ac2dfc0509adb9cbed3625806bff84747ba5dfaecd2936731c60beabd2/ef09c3ac2dfc0509adb9cbed3625806bff84747ba5dfaecd2936731c60beabd2-json.log",
	        "Name": "/scheduled-stop-20210810224840-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210810224840-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210810224840-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b0df91e1d4b2899f570571afa9a012d52abbf341474a22435dd6ebd6c4d53c5-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b0df91e1d4b2899f570571afa9a012d52abbf341474a22435dd6ebd6c4d53c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b0df91e1d4b2899f570571afa9a012d52abbf341474a22435dd6ebd6c4d53c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b0df91e1d4b2899f570571afa9a012d52abbf341474a22435dd6ebd6c4d53c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210810224840-345780",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210810224840-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210810224840-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210810224840-345780",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210810224840-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2768cac76d340ea96cbc3628769386f8ac3baf88667038d2ebf60dad9eeb03f9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/2768cac76d34",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210810224840-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ef09c3ac2dfc"
	                    ],
	                    "NetworkID": "00f4b520081016f1425b25f772e3df103b99868b92590d1393a41394b95ce15d",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210810224840-345780 -n scheduled-stop-20210810224840-345780
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210810224840-345780 -n scheduled-stop-20210810224840-345780: exit status 7 (95.878579ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210810224840-345780" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210810224840-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210810224840-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210810224840-345780: (5.459967638s)
--- FAIL: TestScheduledStopUnix (63.55s)

TestRunningBinaryUpgrade (153.69s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.096376942.exe start -p running-upgrade-20210810224957-345780 --memory=2200 --vm-driver=docker  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.096376942.exe start -p running-upgrade-20210810224957-345780 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m31.686449989s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210810224957-345780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-20210810224957-345780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (58.402899089s)

-- stdout --
	* [running-upgrade-20210810224957-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node running-upgrade-20210810224957-345780 in cluster running-upgrade-20210810224957-345780
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20210810224957-345780" container ...
	
	

-- /stdout --
** stderr ** 
	I0810 22:51:29.583449  506642 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:51:29.583593  506642 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:51:29.583606  506642 out.go:311] Setting ErrFile to fd 2...
	I0810 22:51:29.583615  506642 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:51:29.583779  506642 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:51:29.584121  506642 out.go:305] Setting JSON to false
	I0810 22:51:29.627652  506642 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":9251,"bootTime":1628626639,"procs":292,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:51:29.627796  506642 start.go:121] virtualization: kvm guest
	I0810 22:51:29.629840  506642 out.go:177] * [running-upgrade-20210810224957-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:51:29.631463  506642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:51:29.630041  506642 notify.go:169] Checking for updates...
	I0810 22:51:29.632990  506642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:51:29.634477  506642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:51:29.635985  506642 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:51:29.636593  506642 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0810 22:51:29.638513  506642 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0810 22:51:29.638573  506642 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:51:29.690520  506642 docker.go:132] docker version: linux-19.03.15
	I0810 22:51:29.690620  506642 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:51:29.784781  506642 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:65 SystemTime:2021-08-10 22:51:29.73072084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:51:29.784880  506642 docker.go:244] overlay module found
	I0810 22:51:29.786688  506642 out.go:177] * Using the docker driver based on existing profile
	I0810 22:51:29.786717  506642 start.go:278] selected driver: docker
	I0810 22:51:29.786726  506642 start.go:751] validating driver "docker" against &{Name:running-upgrade-20210810224957-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210810224957-345780 Namespace: APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:51:29.786853  506642 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:51:29.786891  506642 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:51:29.786914  506642 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0810 22:51:29.788524  506642 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:51:29.789516  506642 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:51:29.890010  506642 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:63 SystemTime:2021-08-10 22:51:29.835091329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0810 22:51:29.890178  506642 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:51:29.890212  506642 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0810 22:51:29.892569  506642 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:51:29.892661  506642 cni.go:93] Creating CNI manager for ""
	I0810 22:51:29.892676  506642 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0810 22:51:29.892689  506642 start_flags.go:277] config:
	{Name:running-upgrade-20210810224957-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210810224957-345780 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:51:29.894467  506642 out.go:177] * Starting control plane node running-upgrade-20210810224957-345780 in cluster running-upgrade-20210810224957-345780
	I0810 22:51:29.894522  506642 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:51:29.895982  506642 out.go:177] * Pulling base image ...
	I0810 22:51:29.896015  506642 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0810 22:51:29.896111  506642 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	W0810 22:51:29.938777  506642 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0810 22:51:29.939006  506642 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/running-upgrade-20210810224957-345780/config.json ...
	I0810 22:51:29.939143  506642 cache.go:108] acquiring lock: {Name:mk28c058498c99b4b17534611829df81e65c25d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939122  506642 cache.go:108] acquiring lock: {Name:mk2992684e28e28c0a4befdb8ebb26ca589cb57f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939205  506642 cache.go:108] acquiring lock: {Name:mk546e25e00f9c6b1db501b19ce1d580b9427a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939272  506642 cache.go:108] acquiring lock: {Name:mkbdfa3defe6d3385cdc7fd98eb8ed8245d220a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939258  506642 cache.go:108] acquiring lock: {Name:mkaf8647a9f2ed9a02b38694c0a370855867857f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939310  506642 cache.go:108] acquiring lock: {Name:mk424aee259face7c113807a02e8507dd3f19426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939321  506642 cache.go:108] acquiring lock: {Name:mk1ed9aae172927fa6db6af2d662f32af9bc1ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939380  506642 cache.go:108] acquiring lock: {Name:mk236c609a44f42f8f0ca5c833447492a79e4743 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939378  506642 cache.go:108] acquiring lock: {Name:mkfaf59a6c8ed2536d81961d0198fe2ace0d8c1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939398  506642 cache.go:108] acquiring lock: {Name:mk06ff21464a721667096dff5d67c2caea6f6747 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:29.939429  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0810 22:51:29.939435  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0810 22:51:29.939449  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0810 22:51:29.939457  506642 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 254.94µs
	I0810 22:51:29.939459  506642 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 332.911µs
	I0810 22:51:29.939459  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0810 22:51:29.939471  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0810 22:51:29.939473  506642 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 125.294µs
	I0810 22:51:29.939487  506642 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 108.379µs
	I0810 22:51:29.939489  506642 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0810 22:51:29.939491  506642 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 105.478µs
	I0810 22:51:29.939499  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0810 22:51:29.939508  506642 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0810 22:51:29.939474  506642 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0810 22:51:29.939471  506642 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0810 22:51:29.939499  506642 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0810 22:51:29.939521  506642 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 416.994µs
	I0810 22:51:29.939532  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0810 22:51:29.939535  506642 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0810 22:51:29.939500  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0810 22:51:29.939549  506642 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 409.316µs
	I0810 22:51:29.939561  506642 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0810 22:51:29.939555  506642 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 287.648µs
	I0810 22:51:29.939578  506642 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0810 22:51:29.939590  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0810 22:51:29.939594  506642 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0810 22:51:29.939606  506642 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 287.73µs
	I0810 22:51:29.939618  506642 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0810 22:51:29.939619  506642 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 313.359µs
	I0810 22:51:29.939638  506642 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0810 22:51:29.939653  506642 cache.go:88] Successfully saved all images to host disk.
	I0810 22:51:30.007603  506642 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:51:30.007634  506642 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:51:30.007652  506642 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:51:30.007694  506642 start.go:313] acquiring machines lock for running-upgrade-20210810224957-345780: {Name:mkf65310213422f188bbc6250efb3acf9b31c315 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:30.007821  506642 start.go:317] acquired machines lock for "running-upgrade-20210810224957-345780" in 106.58µs
	I0810 22:51:30.007844  506642 start.go:93] Skipping create...Using existing machine configuration
	I0810 22:51:30.007849  506642 fix.go:55] fixHost starting: m01
	I0810 22:51:30.008091  506642 cli_runner.go:115] Run: docker container inspect running-upgrade-20210810224957-345780 --format={{.State.Status}}
	I0810 22:51:30.052641  506642 fix.go:108] recreateIfNeeded on running-upgrade-20210810224957-345780: state=Running err=<nil>
	W0810 22:51:30.052694  506642 fix.go:134] unexpected machine state, will restart: <nil>
	I0810 22:51:30.055111  506642 out.go:177] * Updating the running docker "running-upgrade-20210810224957-345780" container ...
	I0810 22:51:30.055156  506642 machine.go:88] provisioning docker machine ...
	I0810 22:51:30.055182  506642 ubuntu.go:169] provisioning hostname "running-upgrade-20210810224957-345780"
	I0810 22:51:30.055291  506642 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210810224957-345780
	I0810 22:51:30.106636  506642 main.go:130] libmachine: Using SSH client type: native
	I0810 22:51:30.106804  506642 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0810 22:51:30.106820  506642 main.go:130] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20210810224957-345780 && echo "running-upgrade-20210810224957-345780" | sudo tee /etc/hostname
	I0810 22:51:30.222226  506642 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20210810224957-345780
	
	I0810 22:51:30.222325  506642 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210810224957-345780
	I0810 22:51:30.266577  506642 main.go:130] libmachine: Using SSH client type: native
	I0810 22:51:30.266771  506642 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0810 22:51:30.266800  506642 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20210810224957-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20210810224957-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20210810224957-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:51:30.373456  506642 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:51:30.373494  506642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:51:30.373521  506642 ubuntu.go:177] setting up certificates
	I0810 22:51:30.373534  506642 provision.go:83] configureAuth start
	I0810 22:51:30.373601  506642 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210810224957-345780
	I0810 22:51:30.419737  506642 provision.go:137] copyHostCerts
	I0810 22:51:30.419812  506642 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:51:30.419827  506642 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:51:30.419895  506642 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:51:30.420002  506642 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:51:30.420015  506642 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:51:30.420044  506642 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:51:30.420133  506642 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:51:30.420142  506642 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:51:30.420168  506642 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:51:30.420239  506642 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20210810224957-345780 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20210810224957-345780]
	I0810 22:51:30.631959  506642 provision.go:171] copyRemoteCerts
	I0810 22:51:30.632031  506642 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:51:30.632082  506642 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210810224957-345780
	I0810 22:51:30.679746  506642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/running-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:30.760592  506642 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0810 22:51:30.778193  506642 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0810 22:51:30.805504  506642 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:51:30.826657  506642 provision.go:86] duration metric: configureAuth took 453.107187ms
	I0810 22:51:30.826694  506642 ubuntu.go:193] setting minikube options for container-runtime
	I0810 22:51:30.827031  506642 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210810224957-345780
	I0810 22:51:30.880400  506642 main.go:130] libmachine: Using SSH client type: native
	I0810 22:51:30.880593  506642 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0810 22:51:30.880615  506642 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:51:31.277609  506642 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:51:31.277642  506642 machine.go:91] provisioned docker machine in 1.222478671s
	I0810 22:51:31.277653  506642 start.go:267] post-start starting for "running-upgrade-20210810224957-345780" (driver="docker")
	I0810 22:51:31.277661  506642 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:51:31.277735  506642 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:51:31.277778  506642 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210810224957-345780
	I0810 22:51:31.323625  506642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/running-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:31.406945  506642 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:51:31.410137  506642 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 22:51:31.410167  506642 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 22:51:31.410178  506642 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 22:51:31.410185  506642 info.go:137] Remote host: Ubuntu 19.10
	I0810 22:51:31.410200  506642 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:51:31.410254  506642 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:51:31.410437  506642 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> 3457802.pem in /etc/ssl/certs
	I0810 22:51:31.410574  506642 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:51:31.418029  506642 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:51:31.435964  506642 start.go:270] post-start completed in 158.292224ms
	I0810 22:51:31.436029  506642 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:51:31.436076  506642 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210810224957-345780
	I0810 22:51:31.479069  506642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/running-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:31.561900  506642 fix.go:57] fixHost completed within 1.554042026s
	I0810 22:51:31.561932  506642 start.go:80] releasing machines lock for "running-upgrade-20210810224957-345780", held for 1.554096778s
	I0810 22:51:31.562024  506642 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210810224957-345780
	I0810 22:51:31.618463  506642 ssh_runner.go:149] Run: systemctl --version
	I0810 22:51:31.618508  506642 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:51:31.618527  506642 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210810224957-345780
	I0810 22:51:31.618559  506642 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210810224957-345780
	I0810 22:51:31.665714  506642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/running-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:31.666455  506642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/running-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:31.775412  506642 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:51:31.796640  506642 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:51:31.805746  506642 docker.go:153] disabling docker service ...
	I0810 22:51:31.805836  506642 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:51:31.816831  506642 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:51:31.827495  506642 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:51:31.892068  506642 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:51:31.955393  506642 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:51:31.965035  506642 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:51:31.977403  506642 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0810 22:51:31.989051  506642 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:51:31.995183  506642 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:51:31.995244  506642 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:51:32.001940  506642 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:51:32.007931  506642 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:51:32.067749  506642 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:51:32.163657  506642 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:51:32.163745  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:32.167512  506642 retry.go:31] will retry after 1.104660288s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:33.272499  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:33.276141  506642 retry.go:31] will retry after 2.160763633s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:35.437475  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:35.441178  506642 retry.go:31] will retry after 2.62026012s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:38.063584  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:38.067168  506642 retry.go:31] will retry after 3.164785382s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:41.233054  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:41.236743  506642 retry.go:31] will retry after 4.680977329s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:45.918563  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:45.922249  506642 retry.go:31] will retry after 9.01243771s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:54.934897  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:54.939168  506642 retry.go:31] will retry after 6.442959172s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:52:01.383717  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:52:01.387755  506642 retry.go:31] will retry after 11.217246954s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:52:12.606256  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:52:12.610066  506642 retry.go:31] will retry after 15.299675834s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:52:27.912504  506642 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:52:27.923087  506642 out.go:177] 
	W0810 22:52:27.923235  506642 out.go:242] X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	W0810 22:52:27.923253  506642 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0810 22:52:27.925483  506642 out.go:242] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                           │
	│                                                                                                                                                         │
	│    * Please attach the following file to the GitHub issue:                                                                                              │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0810 22:52:27.927148  506642 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:140: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-20210810224957-345780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-08-10 22:52:27.94548203 +0000 UTC m=+1977.773965886
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20210810224957-345780
helpers_test.go:236: (dbg) docker inspect running-upgrade-20210810224957-345780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e7a21ecfeaaf83e07d816be491e4a46be06d7951a471386e3a126ef5c4ea90e0",
	        "Created": "2021-08-10T22:49:58.805349221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487406,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:49:59.461574794Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/e7a21ecfeaaf83e07d816be491e4a46be06d7951a471386e3a126ef5c4ea90e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7a21ecfeaaf83e07d816be491e4a46be06d7951a471386e3a126ef5c4ea90e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7a21ecfeaaf83e07d816be491e4a46be06d7951a471386e3a126ef5c4ea90e0/hosts",
	        "LogPath": "/var/lib/docker/containers/e7a21ecfeaaf83e07d816be491e4a46be06d7951a471386e3a126ef5c4ea90e0/e7a21ecfeaaf83e07d816be491e4a46be06d7951a471386e3a126ef5c4ea90e0-json.log",
	        "Name": "/running-upgrade-20210810224957-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20210810224957-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c080ea5635031a786a0b3219d0f2cf134577ba7ec667916a9e384a0686cf6e2e-init/diff:/var/lib/docker/overlay2/de6af85d43ab6de82a80599c78c852ce945860493e987ae8d4747813e3e12e71/diff:/var/lib/docker/overlay2/1463f2b27e2cf184f9e8a7e127a3f6ecaa9eb4e8c586d13eb98ef0034f418eca/diff:/var/lib/docker/overlay2/6fae380631f93f264fc69450c6bd514661e47e2e598e586796b4ef5487d2609b/diff:/var/lib/docker/overlay2/9455405085a27b776dbc930a9422413a8738ee14a396dba1428ad3477dd78d19/diff:/var/lib/docker/overlay2/872cbd16ad0ea1d1a8643af87081f3ffd14a4cc7bb05e0117ff9630a1e4c2d63/diff:/var/lib/docker/overlay2/1cfe85b8b9110dde1cfd7cd18efd634d01d4c6b46da62d17a26da23aa02686be/diff:/var/lib/docker/overlay2/189b625246c097ae32fa419f11770e2e28b30b39afd65b82dc25c55530584d10/diff:/var/lib/docker/overlay2/f5b5179d9c5187ae940c59c3a026ef190561c0532770dbd761fecfc6251ebc05/diff:/var/lib/docker/overlay2/116a802d8be0890169902c8fcb2ad1b64b5391fa1a060c1f02d344668cf1e40f/diff:/var/lib/docker/overlay2/d335f4f8874ac51d7120bb297af4bf45b5ab1c41f3977cabfa2149948695c6e9/diff:/var/lib/docker/overlay2/cfc70be91e8c4eaba2033239d05c70abdaaae7922eebe0a9694302cde2259694/diff:/var/lib/docker/overlay2/901fced2d4ec35a47265e02248dd5ae2f3130431109d25e604d2ab568d1bde04/diff:/var/lib/docker/overlay2/7aa7e86939390a956567b669d4bab83fb60927bb30f5a9803342e0d68bd3e23f/diff:/var/lib/docker/overlay2/a482a71267c1aded8aadff398336811f3437dec13bdea6065ac47ad1eb5eed5f/diff:/var/lib/docker/overlay2/972f22e2510a2c07193729807506aedac3ec49bb2063b2b7c3e443b7380a91c5/diff:/var/lib/docker/overlay2/8c845952b97a856c0093d30bbe000f51feda3cb8d3a525e83d8633d5af175938/diff:/var/lib/docker/overlay2/85f0f897ba04db0a863dd2628b8b2e7d3539cecbb6acd1530907b350763c6550/diff:/var/lib/docker/overlay2/f4060f75e85c12bf3ba15020ed3c17665bed2409afc88787b2341c6d5af01040/diff:/var/lib/docker/overlay2/7fa8f93d5ee1866f01fa7288d688713da7f1044a1942eb59534b94cb95cc3d74/diff:/var/lib/docker/overlay2/0d91418cf4c9ce3175fcb432fd443e696caae83859f6d5e10cdfaf102243e189/diff:/var/lib/docker/overlay2/f4f812cd2dd5b0b125eea4bff29d3ed0d34fa877c492159a8b8b6aee1f536d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c080ea5635031a786a0b3219d0f2cf134577ba7ec667916a9e384a0686cf6e2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c080ea5635031a786a0b3219d0f2cf134577ba7ec667916a9e384a0686cf6e2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c080ea5635031a786a0b3219d0f2cf134577ba7ec667916a9e384a0686cf6e2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20210810224957-345780",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20210810224957-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20210810224957-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20210810224957-345780",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20210810224957-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5adf53185af9eea00f3d36bfc20b711904eed36cb29b456b1bf7735c9115c542",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5adf53185af9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "2309ffcb23e732f50df28b8d2dfdba5a79ab8408998aef949252b8e1b2c6edd5",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "640d9308dcebc85224b7dd7d358621581fe848de066c07f085f358e7c00feeeb",
	                    "EndpointID": "2309ffcb23e732f50df28b8d2dfdba5a79ab8408998aef949252b8e1b2c6edd5",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210810224957-345780 -n running-upgrade-20210810224957-345780
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210810224957-345780 -n running-upgrade-20210810224957-345780: exit status 4 (327.843787ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0810 22:52:28.291876  513753 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20210810224957-345780" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 4 (may be ok)
helpers_test.go:242: "running-upgrade-20210810224957-345780" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:176: Cleaning up "running-upgrade-20210810224957-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210810224957-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210810224957-345780: (2.777276937s)
--- FAIL: TestRunningBinaryUpgrade (153.69s)

                                                
                                    
TestStoppedBinaryUpgrade (170.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.9.0.349143691.exe start -p stopped-upgrade-20210810224957-345780 --memory=2200 --vm-driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Done: /tmp/minikube-v1.9.0.349143691.exe start -p stopped-upgrade-20210810224957-345780 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m33.295520487s)
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.349143691.exe -p stopped-upgrade-20210810224957-345780 stop
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.349143691.exe -p stopped-upgrade-20210810224957-345780 stop: (11.290478587s)
version_upgrade_test.go:201: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20210810224957-345780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-20210810224957-345780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (1m3.012773213s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210810224957-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node stopped-upgrade-20210810224957-345780 in cluster stopped-upgrade-20210810224957-345780
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-20210810224957-345780" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0810 22:51:42.526721  508103 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:51:42.526925  508103 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:51:42.526933  508103 out.go:311] Setting ErrFile to fd 2...
	I0810 22:51:42.526945  508103 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:51:42.527053  508103 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:51:42.527298  508103 out.go:305] Setting JSON to false
	I0810 22:51:42.566371  508103 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":9264,"bootTime":1628626639,"procs":273,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:51:42.566507  508103 start.go:121] virtualization: kvm guest
	I0810 22:51:42.569246  508103 out.go:177] * [stopped-upgrade-20210810224957-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:51:42.570870  508103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:51:42.569425  508103 notify.go:169] Checking for updates...
	I0810 22:51:42.572534  508103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:51:42.574094  508103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:51:42.575595  508103 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:51:42.576062  508103 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0810 22:51:42.578329  508103 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0810 22:51:42.578401  508103 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:51:42.632627  508103 docker.go:132] docker version: linux-19.03.15
	I0810 22:51:42.632738  508103 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:51:42.723827  508103 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-10 22:51:42.67153278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:51:42.723920  508103 docker.go:244] overlay module found
	I0810 22:51:42.726144  508103 out.go:177] * Using the docker driver based on existing profile
	I0810 22:51:42.726180  508103 start.go:278] selected driver: docker
	I0810 22:51:42.726188  508103 start.go:751] validating driver "docker" against &{Name:stopped-upgrade-20210810224957-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210810224957-345780 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:51:42.726267  508103 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:51:42.726306  508103 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:51:42.726324  508103 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0810 22:51:42.728022  508103 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:51:42.729045  508103 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:51:42.816697  508103 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-10 22:51:42.767051692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0810 22:51:42.816825  508103 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:51:42.816868  508103 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0810 22:51:42.819153  508103 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:51:42.819234  508103 cni.go:93] Creating CNI manager for ""
	I0810 22:51:42.819244  508103 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0810 22:51:42.819255  508103 start_flags.go:277] config:
	{Name:stopped-upgrade-20210810224957-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210810224957-345780 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:51:42.821022  508103 out.go:177] * Starting control plane node stopped-upgrade-20210810224957-345780 in cluster stopped-upgrade-20210810224957-345780
	I0810 22:51:42.821069  508103 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:51:42.822554  508103 out.go:177] * Pulling base image ...
	I0810 22:51:42.822622  508103 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0810 22:51:42.822673  508103 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	W0810 22:51:42.868004  508103 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0810 22:51:42.868204  508103 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/stopped-upgrade-20210810224957-345780/config.json ...
	I0810 22:51:42.868335  508103 cache.go:108] acquiring lock: {Name:mk2992684e28e28c0a4befdb8ebb26ca589cb57f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868380  508103 cache.go:108] acquiring lock: {Name:mkaf8647a9f2ed9a02b38694c0a370855867857f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868342  508103 cache.go:108] acquiring lock: {Name:mk546e25e00f9c6b1db501b19ce1d580b9427a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868416  508103 cache.go:108] acquiring lock: {Name:mk28c058498c99b4b17534611829df81e65c25d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868423  508103 cache.go:108] acquiring lock: {Name:mk236c609a44f42f8f0ca5c833447492a79e4743 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868426  508103 cache.go:108] acquiring lock: {Name:mk06ff21464a721667096dff5d67c2caea6f6747 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868479  508103 cache.go:108] acquiring lock: {Name:mkbdfa3defe6d3385cdc7fd98eb8ed8245d220a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868540  508103 cache.go:108] acquiring lock: {Name:mk424aee259face7c113807a02e8507dd3f19426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868543  508103 cache.go:108] acquiring lock: {Name:mk1ed9aae172927fa6db6af2d662f32af9bc1ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868538  508103 cache.go:108] acquiring lock: {Name:mkfaf59a6c8ed2536d81961d0198fe2ace0d8c1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.868666  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0810 22:51:42.868697  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0810 22:51:42.868720  508103 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 403.029µs
	I0810 22:51:42.868739  508103 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0810 22:51:42.868695  508103 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 307.868µs
	I0810 22:51:42.868751  508103 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0810 22:51:42.868670  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0810 22:51:42.868768  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0810 22:51:42.868775  508103 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 240.846µs
	I0810 22:51:42.868790  508103 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0810 22:51:42.868670  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0810 22:51:42.868792  508103 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 455.376µs
	I0810 22:51:42.868809  508103 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 274.558µs
	I0810 22:51:42.868820  508103 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0810 22:51:42.868814  508103 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0810 22:51:42.868837  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0810 22:51:42.868857  508103 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 449.054µs
	I0810 22:51:42.868870  508103 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0810 22:51:42.868898  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0810 22:51:42.868960  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0810 22:51:42.868962  508103 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 603.616µs
	I0810 22:51:42.868981  508103 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0810 22:51:42.868979  508103 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 571.963µs
	I0810 22:51:42.868994  508103 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0810 22:51:42.868911  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0810 22:51:42.869002  508103 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0810 22:51:42.869015  508103 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 479.877µs
	I0810 22:51:42.869029  508103 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0810 22:51:42.869016  508103 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 539.235µs
	I0810 22:51:42.869041  508103 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0810 22:51:42.869052  508103 cache.go:88] Successfully saved all images to host disk.
	I0810 22:51:42.920788  508103 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:51:42.920822  508103 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:51:42.920839  508103 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:51:42.920880  508103 start.go:313] acquiring machines lock for stopped-upgrade-20210810224957-345780: {Name:mkc42bf7adafd6977d5ee4e44a6ad99101c61aad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:51:42.921063  508103 start.go:317] acquired machines lock for "stopped-upgrade-20210810224957-345780" in 156.906µs
	I0810 22:51:42.921090  508103 start.go:93] Skipping create...Using existing machine configuration
	I0810 22:51:42.921097  508103 fix.go:55] fixHost starting: m01
	I0810 22:51:42.921385  508103 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210810224957-345780 --format={{.State.Status}}
	I0810 22:51:42.963473  508103 fix.go:108] recreateIfNeeded on stopped-upgrade-20210810224957-345780: state=Stopped err=<nil>
	W0810 22:51:42.963508  508103 fix.go:134] unexpected machine state, will restart: <nil>
	I0810 22:51:42.966195  508103 out.go:177] * Restarting existing docker container for "stopped-upgrade-20210810224957-345780" ...
	I0810 22:51:42.966284  508103 cli_runner.go:115] Run: docker start stopped-upgrade-20210810224957-345780
	I0810 22:51:43.968486  508103 cli_runner.go:168] Completed: docker start stopped-upgrade-20210810224957-345780: (1.002167799s)
	I0810 22:51:43.968585  508103 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210810224957-345780 --format={{.State.Status}}
	I0810 22:51:44.022658  508103 kic.go:420] container "stopped-upgrade-20210810224957-345780" state is running.
	I0810 22:51:44.189185  508103 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210810224957-345780
	I0810 22:51:44.244082  508103 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/stopped-upgrade-20210810224957-345780/config.json ...
	I0810 22:51:44.244249  508103 machine.go:88] provisioning docker machine ...
	I0810 22:51:44.244274  508103 ubuntu.go:169] provisioning hostname "stopped-upgrade-20210810224957-345780"
	I0810 22:51:44.244322  508103 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210810224957-345780
	I0810 22:51:44.308416  508103 main.go:130] libmachine: Using SSH client type: native
	I0810 22:51:44.308656  508103 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0810 22:51:44.308678  508103 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210810224957-345780 && echo "stopped-upgrade-20210810224957-345780" | sudo tee /etc/hostname
	I0810 22:51:44.310180  508103 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56930->127.0.0.1:33126: read: connection reset by peer
	I0810 22:51:47.438210  508103 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210810224957-345780
	
	I0810 22:51:47.438303  508103 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210810224957-345780
	I0810 22:51:47.480484  508103 main.go:130] libmachine: Using SSH client type: native
	I0810 22:51:47.480644  508103 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0810 22:51:47.480659  508103 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210810224957-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210810224957-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210810224957-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:51:47.584792  508103 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:51:47.584833  508103 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:51:47.584888  508103 ubuntu.go:177] setting up certificates
	I0810 22:51:47.584900  508103 provision.go:83] configureAuth start
	I0810 22:51:47.584990  508103 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210810224957-345780
	I0810 22:51:47.643725  508103 provision.go:137] copyHostCerts
	I0810 22:51:47.643790  508103 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:51:47.643802  508103 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:51:47.643853  508103 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:51:47.643929  508103 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:51:47.643940  508103 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:51:47.643978  508103 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:51:47.644041  508103 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:51:47.644051  508103 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:51:47.644070  508103 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:51:47.644122  508103 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20210810224957-345780 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-20210810224957-345780]
	I0810 22:51:47.910500  508103 provision.go:171] copyRemoteCerts
	I0810 22:51:47.910578  508103 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:51:47.910627  508103 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210810224957-345780
	I0810 22:51:47.953701  508103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/stopped-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:48.036579  508103 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0810 22:51:48.053623  508103 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:51:48.071446  508103 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0810 22:51:48.090852  508103 provision.go:86] duration metric: configureAuth took 505.933622ms
	I0810 22:51:48.090881  508103 ubuntu.go:193] setting minikube options for container-runtime
	I0810 22:51:48.091260  508103 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210810224957-345780
	I0810 22:51:48.141575  508103 main.go:130] libmachine: Using SSH client type: native
	I0810 22:51:48.141753  508103 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0810 22:51:48.141771  508103 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:51:48.858588  508103 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:51:48.858625  508103 machine.go:91] provisioned docker machine in 4.614358064s
	I0810 22:51:48.858638  508103 start.go:267] post-start starting for "stopped-upgrade-20210810224957-345780" (driver="docker")
	I0810 22:51:48.858647  508103 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:51:48.858727  508103 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:51:48.858776  508103 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210810224957-345780
	I0810 22:51:48.922070  508103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/stopped-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:49.005138  508103 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:51:49.008372  508103 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 22:51:49.008402  508103 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 22:51:49.008415  508103 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 22:51:49.008423  508103 info.go:137] Remote host: Ubuntu 19.10
	I0810 22:51:49.008435  508103 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:51:49.008494  508103 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:51:49.008594  508103 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> 3457802.pem in /etc/ssl/certs
	I0810 22:51:49.008729  508103 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:51:49.015795  508103 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:51:49.038780  508103 start.go:270] post-start completed in 180.121775ms
	I0810 22:51:49.038874  508103 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:51:49.038925  508103 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210810224957-345780
	I0810 22:51:49.084982  508103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/stopped-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:49.161918  508103 fix.go:57] fixHost completed within 6.240811488s
	I0810 22:51:49.161947  508103 start.go:80] releasing machines lock for "stopped-upgrade-20210810224957-345780", held for 6.240868134s
	I0810 22:51:49.162051  508103 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210810224957-345780
	I0810 22:51:49.206958  508103 ssh_runner.go:149] Run: systemctl --version
	I0810 22:51:49.207019  508103 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:51:49.207036  508103 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210810224957-345780
	I0810 22:51:49.207087  508103 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210810224957-345780
	I0810 22:51:49.251125  508103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/stopped-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:49.253288  508103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/stopped-upgrade-20210810224957-345780/id_rsa Username:docker}
	I0810 22:51:49.359018  508103 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:51:49.380424  508103 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:51:49.391348  508103 docker.go:153] disabling docker service ...
	I0810 22:51:49.391404  508103 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:51:49.404605  508103 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:51:49.415485  508103 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:51:49.475613  508103 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:51:49.542786  508103 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:51:49.553543  508103 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:51:49.574100  508103 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0810 22:51:49.582652  508103 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:51:49.589196  508103 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:51:49.589259  508103 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:51:49.597067  508103 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:51:49.603955  508103 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:51:49.649866  508103 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:51:49.721082  508103 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:51:49.721166  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:49.724430  508103 retry.go:31] will retry after 1.104660288s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:50.829758  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:50.833676  508103 retry.go:31] will retry after 2.160763633s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:52.994748  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:52.998398  508103 retry.go:31] will retry after 2.62026012s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:55.621073  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:55.625551  508103 retry.go:31] will retry after 3.164785382s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:51:58.791454  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:51:58.795926  508103 retry.go:31] will retry after 4.680977329s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:52:03.477769  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:52:03.481222  508103 retry.go:31] will retry after 9.01243771s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:52:12.494401  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:52:12.498133  508103 retry.go:31] will retry after 6.442959172s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:52:18.942239  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:52:18.945818  508103 retry.go:31] will retry after 11.217246954s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:52:30.165037  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:52:30.168670  508103 retry.go:31] will retry after 15.299675834s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0810 22:52:45.469073  508103 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:52:45.476242  508103 out.go:177] 
	W0810 22:52:45.476420  508103 out.go:242] X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	W0810 22:52:45.476435  508103 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0810 22:52:45.479039  508103 out.go:242] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                           │
	│                                                                                                                                                         │
	│    * Please attach the following file to the GitHub issue:                                                                                              │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0810 22:52:45.480331  508103 out.go:177] 

** /stderr **
version_upgrade_test.go:203: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-20210810224957-345780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-08-10 22:52:45.499600127 +0000 UTC m=+1995.328083953
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStoppedBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect stopped-upgrade-20210810224957-345780
helpers_test.go:236: (dbg) docker inspect stopped-upgrade-20210810224957-345780:

-- stdout --
	[
	    {
	        "Id": "187be9293053d360039290f3e0f399d433efc120828e8b581c1fbddcd70a125d",
	        "Created": "2021-08-10T22:50:03.98149245Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 508356,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:51:43.959434613Z",
	            "FinishedAt": "2021-08-10T22:51:42.120642042Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/187be9293053d360039290f3e0f399d433efc120828e8b581c1fbddcd70a125d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/187be9293053d360039290f3e0f399d433efc120828e8b581c1fbddcd70a125d/hostname",
	        "HostsPath": "/var/lib/docker/containers/187be9293053d360039290f3e0f399d433efc120828e8b581c1fbddcd70a125d/hosts",
	        "LogPath": "/var/lib/docker/containers/187be9293053d360039290f3e0f399d433efc120828e8b581c1fbddcd70a125d/187be9293053d360039290f3e0f399d433efc120828e8b581c1fbddcd70a125d-json.log",
	        "Name": "/stopped-upgrade-20210810224957-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "stopped-upgrade-20210810224957-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8fdd1e557d906df641a6d9d14c60f02311a1c7966041b386043326c2677b2967-init/diff:/var/lib/docker/overlay2/de6af85d43ab6de82a80599c78c852ce945860493e987ae8d4747813e3e12e71/diff:/var/lib/docker/overlay2/1463f2b27e2cf184f9e8a7e127a3f6ecaa9eb4e8c586d13eb98ef0034f418eca/diff:/var/lib/docker/overlay2/6fae380631f93f264fc69450c6bd514661e47e2e598e586796b4ef5487d2609b/diff:/var/lib/docker/overlay2/9455405085a27b776dbc930a9422413a8738ee14a396dba1428ad3477dd78d19/diff:/var/lib/docker/overlay2/872cbd16ad0ea1d1a8643af87081f3ffd14a4cc7bb05e0117ff9630a1e4c2d63/diff:/var/lib/docker/overlay2/1cfe85b8b9110dde1cfd7cd18efd634d01d4c6b46da62d17a26da23aa02686be/diff:/var/lib/docker/overlay2/189b625246c097ae32fa419f11770e2e28b30b39afd65b82dc25c55530584d10/diff:/var/lib/docker/overlay2/f5b5179d9c5187ae940c59c3a026ef190561c0532770dbd761fecfc6251ebc05/diff:/var/lib/docker/overlay2/116a802d8be0890169902c8fcb2ad1b64b5391fa1a060c1f02d344668cf1e40f/diff:/var/lib/docker/overlay2/d335f4
f8874ac51d7120bb297af4bf45b5ab1c41f3977cabfa2149948695c6e9/diff:/var/lib/docker/overlay2/cfc70be91e8c4eaba2033239d05c70abdaaae7922eebe0a9694302cde2259694/diff:/var/lib/docker/overlay2/901fced2d4ec35a47265e02248dd5ae2f3130431109d25e604d2ab568d1bde04/diff:/var/lib/docker/overlay2/7aa7e86939390a956567b669d4bab83fb60927bb30f5a9803342e0d68bd3e23f/diff:/var/lib/docker/overlay2/a482a71267c1aded8aadff398336811f3437dec13bdea6065ac47ad1eb5eed5f/diff:/var/lib/docker/overlay2/972f22e2510a2c07193729807506aedac3ec49bb2063b2b7c3e443b7380a91c5/diff:/var/lib/docker/overlay2/8c845952b97a856c0093d30bbe000f51feda3cb8d3a525e83d8633d5af175938/diff:/var/lib/docker/overlay2/85f0f897ba04db0a863dd2628b8b2e7d3539cecbb6acd1530907b350763c6550/diff:/var/lib/docker/overlay2/f4060f75e85c12bf3ba15020ed3c17665bed2409afc88787b2341c6d5af01040/diff:/var/lib/docker/overlay2/7fa8f93d5ee1866f01fa7288d688713da7f1044a1942eb59534b94cb95cc3d74/diff:/var/lib/docker/overlay2/0d91418cf4c9ce3175fcb432fd443e696caae83859f6d5e10cdfaf102243e189/diff:/var/lib/d
ocker/overlay2/f4f812cd2dd5b0b125eea4bff29d3ed0d34fa877c492159a8b8b6aee1f536d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fdd1e557d906df641a6d9d14c60f02311a1c7966041b386043326c2677b2967/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fdd1e557d906df641a6d9d14c60f02311a1c7966041b386043326c2677b2967/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fdd1e557d906df641a6d9d14c60f02311a1c7966041b386043326c2677b2967/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "stopped-upgrade-20210810224957-345780",
	                "Source": "/var/lib/docker/volumes/stopped-upgrade-20210810224957-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "stopped-upgrade-20210810224957-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "stopped-upgrade-20210810224957-345780",
	                "name.minikube.sigs.k8s.io": "stopped-upgrade-20210810224957-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "568ce3416d71e829ba5e3998268565cc5b70fe92c24ef69d44f48fdbcb31e4dc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/568ce3416d71",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "d881f85f8fc9fd66afec951656760a891c3eaffb4ce45a2d277c79275d23d45e",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "640d9308dcebc85224b7dd7d358621581fe848de066c07f085f358e7c00feeeb",
	                    "EndpointID": "d881f85f8fc9fd66afec951656760a891c3eaffb4ce45a2d277c79275d23d45e",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210810224957-345780 -n stopped-upgrade-20210810224957-345780
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210810224957-345780 -n stopped-upgrade-20210810224957-345780: exit status 6 (311.163786ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0810 22:52:45.844821  518563 status.go:413] kubeconfig endpoint: extract IP: "stopped-upgrade-20210810224957-345780" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig

** /stderr **
helpers_test.go:240: status error: exit status 6 (may be ok)
helpers_test.go:242: "stopped-upgrade-20210810224957-345780" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:176: Cleaning up "stopped-upgrade-20210810224957-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20210810224957-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20210810224957-345780: (2.415980963s)
--- FAIL: TestStoppedBinaryUpgrade (170.86s)

TestKubernetesUpgrade (3423.39s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810224957-345780 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810224957-345780 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.248953144s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210810224957-345780

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210810224957-345780: signal: killed (54m3.75362269s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20210810224957-345780"  ...
	* Powering off "kubernetes-upgrade-20210810224957-345780" via SSH ...

-- /stdout --
version_upgrade_test.go:231: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210810224957-345780 failed: signal: killed
panic.go:613: *** TestKubernetesUpgrade FAILED at 2021-08-10 23:44:57.414934765 +0000 UTC m=+5127.243418563
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect kubernetes-upgrade-20210810224957-345780
helpers_test.go:236: (dbg) docker inspect kubernetes-upgrade-20210810224957-345780:

-- stdout --
	[
	    {
	        "Id": "c7562fcd36088b65e738a362215ad01d421bbe18ec1823ecafb06b838720cdd6",
	        "Created": "2021-08-10T22:50:00.805934795Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488036,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:50:01.659718615Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/c7562fcd36088b65e738a362215ad01d421bbe18ec1823ecafb06b838720cdd6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7562fcd36088b65e738a362215ad01d421bbe18ec1823ecafb06b838720cdd6/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7562fcd36088b65e738a362215ad01d421bbe18ec1823ecafb06b838720cdd6/hosts",
	        "LogPath": "/var/lib/docker/containers/c7562fcd36088b65e738a362215ad01d421bbe18ec1823ecafb06b838720cdd6/c7562fcd36088b65e738a362215ad01d421bbe18ec1823ecafb06b838720cdd6-json.log",
	        "Name": "/kubernetes-upgrade-20210810224957-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20210810224957-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20210810224957-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/753cb8a259af1a00a878bb7d3fb151a75ac8329ae0f385a33c13a10ac12b9e88-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/753cb8a259af1a00a878bb7d3fb151a75ac8329ae0f385a33c13a10ac12b9e88/merged",
	                "UpperDir": "/var/lib/docker/overlay2/753cb8a259af1a00a878bb7d3fb151a75ac8329ae0f385a33c13a10ac12b9e88/diff",
	                "WorkDir": "/var/lib/docker/overlay2/753cb8a259af1a00a878bb7d3fb151a75ac8329ae0f385a33c13a10ac12b9e88/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20210810224957-345780",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20210810224957-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20210810224957-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20210810224957-345780",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20210810224957-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfffa8ebad5d496c9369b81601b3e9e7a9b4aad5e15baadaa12ee826aa4617f4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cfffa8ebad5d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20210810224957-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7562fcd3608"
	                    ],
	                    "NetworkID": "08818fa49fb24311cfc7f2d7db760373beb2232454892b43e21517e470bf87f5",
	                    "EndpointID": "970c7dd79a80b69f4ad5103b785c6ba95925076544d2284fcc89487f7a557dbb",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20210810224957-345780 -n kubernetes-upgrade-20210810224957-345780

=== CONT  TestKubernetesUpgrade
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20210810224957-345780 -n kubernetes-upgrade-20210810224957-345780: exit status 3 (3.33846747s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0810 23:45:00.799723  620794 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:33374->127.0.0.1:33119: read: connection reset by peer
	E0810 23:45:00.799743  620794 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:33374->127.0.0.1:33119: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "kubernetes-upgrade-20210810224957-345780" host is not running, skipping log retrieval (state="Error")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210810224957-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210810224957-345780

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210810224957-345780: signal: killed (2m0.003125314s)

                                                
                                                
-- stdout --
	* Deleting "kubernetes-upgrade-20210810224957-345780" in docker ...
	* Deleting container "kubernetes-upgrade-20210810224957-345780" ...

                                                
                                                
-- /stdout --
helpers_test.go:181: failed cleanup: signal: killed
--- FAIL: TestKubernetesUpgrade (3423.39s)

                                                
                                    
TestPause/serial/PauseAgain (34.93s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210810225233-345780 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210810225233-345780 --alsologtostderr -v=5: exit status 80 (9.362047625s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210810225233-345780 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0810 22:54:26.078356  539635 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:54:26.078509  539635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:54:26.078523  539635 out.go:311] Setting ErrFile to fd 2...
	I0810 22:54:26.078528  539635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:54:26.078723  539635 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:54:26.078996  539635 out.go:305] Setting JSON to false
	I0810 22:54:26.079029  539635 mustload.go:65] Loading cluster: pause-20210810225233-345780
	I0810 22:54:26.080087  539635 cli_runner.go:115] Run: docker container inspect pause-20210810225233-345780 --format={{.State.Status}}
	I0810 22:54:26.127212  539635 host.go:66] Checking if "pause-20210810225233-345780" exists ...
	I0810 22:54:26.127939  539635 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192
.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628238775-12122/minikube-v1.22.0-1628238775-12122.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628238775-12122.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-sh
ares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210810225233-345780 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0810 22:54:26.130535  539635 out.go:177] * Pausing node pause-20210810225233-345780 ... 
	I0810 22:54:26.130583  539635 host.go:66] Checking if "pause-20210810225233-345780" exists ...
	I0810 22:54:26.130966  539635 ssh_runner.go:149] Run: systemctl --version
	I0810 22:54:26.131024  539635 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:26.195168  539635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/pause-20210810225233-345780/id_rsa Username:docker}
	I0810 22:54:26.288875  539635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:54:26.298830  539635 pause.go:50] kubelet running: true
	I0810 22:54:26.298882  539635 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0810 22:54:31.429526  539635 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (5.130618528s)
	I0810 22:54:31.936238  539635 cri.go:41] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0810 22:54:31.936340  539635 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0810 22:54:32.028744  539635 cri.go:76] found id: "26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d"
	I0810 22:54:32.028778  539635 cri.go:76] found id: "27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0"
	I0810 22:54:32.028786  539635 cri.go:76] found id: "3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012"
	I0810 22:54:32.028792  539635 cri.go:76] found id: "15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356"
	I0810 22:54:32.028798  539635 cri.go:76] found id: "46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc"
	I0810 22:54:32.028805  539635 cri.go:76] found id: "a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316"
	I0810 22:54:32.028811  539635 cri.go:76] found id: "22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32"
	I0810 22:54:32.028819  539635 cri.go:76] found id: "57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714"
	I0810 22:54:32.028825  539635 cri.go:76] found id: ""
	I0810 22:54:32.028871  539635 ssh_runner.go:149] Run: sudo runc list -f json
	I0810 22:54:32.070265  539635 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356","pid":2165,"status":"running","bundle":"/run/containers/storage/overlay-containers/15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356/userdata","rootfs":"/var/lib/containers/storage/overlay/071c4e44af528aeace7f08903f94db76d8f44aff75afa7a91f7d3b9856a0864e/merged","created":"2021-08-10T22:53:21.157739195Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"26776f60","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"26776f60\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:53:20.877927351Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-pwcm8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6dee8a63-1575-445b-9978-c72ad86a1d79\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pwcm8_6dee8a63-1575-445b-9978-c72ad86a1d79/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/071c4e44af528aeace7f08903f94db76d8f44aff75afa7a91f7d3b9856a0864e/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-pwcm8_kube-system_6dee8a63-1575-445b-9978-c72ad86a1d79_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-pwcm8_kube-system_6dee8a63-1575-445b-9978-c72ad86a1d79_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pod
s/6dee8a63-1575-445b-9978-c72ad86a1d79/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6dee8a63-1575-445b-9978-c72ad86a1d79/containers/kube-proxy/5278d268\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/6dee8a63-1575-445b-9978-c72ad86a1d79/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/6dee8a63-1575-445b-9978-c72ad86a1d79/volumes/kubernetes.io~projected/kube-api-access-62t24\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-pwcm8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6dee8a63-1575-445b-9978-c72ad86a1d79","kubernetes.io/config.seen":"2021-08-10T22:53:19.710142604Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property
.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe","pid":1180,"status":"running","bundle":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata","rootfs":"/var/lib/containers/storage/overlay/5c0f9493127bf73e4dfba6fe76a3f2c264b05d896aac47faa68d474488dd2858/merged","created":"2021-08-10T22:52:51.517283387Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"160ac3a3a89c5d2b6c0448032f32313f\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-10T22:52:49.970645682Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe","io.kubernetes.cri
-o.ContainerName":"k8s_POD_etcd-pause-20210810225233-345780_kube-system_160ac3a3a89c5d2b6c0448032f32313f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:52:51.357977899Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210810225233-345780","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"160ac3a3a89c5d2b6c0448032f32313f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210810225233-345780\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210810225233-345780_160ac3a3a89c5d2b6c0448032f32313f/1be
5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210810225233-345780\",\"uid\":\"160ac3a3a89c5d2b6c0448032f32313f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5c0f9493127bf73e4dfba6fe76a3f2c264b05d896aac47faa68d474488dd2858/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210810225233-345780_kube-system_160ac3a3a89c5d2b6c0448032f32313f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe","io.kubernetes.cri-o.SeccompProfilePath":"
runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"160ac3a3a89c5d2b6c0448032f32313f","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"160ac3a3a89c5d2b6c0448032f32313f","kubernetes.io/config.seen":"2021-08-10T22:52:49.970645682Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32","pid":1335,"status":"running","bundle":"/run/containers/storage/overlay-containers/22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32/userdata","rootfs":"/var/lib/containers/storage/overlay/e391f330ab4e80cdf4e04fe94b6cf5e402b735ee77b3ddfb6a0
639a4c8b9844c/merged","created":"2021-08-10T22:52:52.741191479Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:52:52.453374204Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.I
mageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210810225233-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b674a9b0919cd96b03ee6b9415bb734d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210810225233-345780_b674a9b0919cd96b03ee6b9415bb734d/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e391f330ab4e80cdf4e04fe94b6cf5e402b735ee77b3ddfb6a0639a4c8b9844c/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210810225233-345780_kube-system_b674a9b0919cd96b03ee6b9415bb734d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf
94f44d79e341784e598653c7ce7e75/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210810225233-345780_kube-system_b674a9b0919cd96b03ee6b9415bb734d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b674a9b0919cd96b03ee6b9415bb734d/containers/kube-apiserver/d568f8a1\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b674a9b0919cd96b03ee6b9415bb734d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/
ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b674a9b0919cd96b03ee6b9415bb734d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b674a9b0919cd96b03ee6b9415bb734d","kubernetes.io/config.seen":"2021-08-10T22:52:49.970647167Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d","pid":3721,"status":"run
ning","bundle":"/run/containers/storage/overlay-containers/26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d/userdata","rootfs":"/var/lib/containers/storage/overlay/96a65636ee386c6cc0fd629c9f54aac7dd527b612c86286bc34b46437b7530fc/merged","created":"2021-08-10T22:54:23.625273439Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d2ba865c","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d2ba865c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd5
19247d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:54:23.480651263Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2c07bcdc-3e04-4897-98d3-e3c2e9120858\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_2c07bcdc-3e04-4897-98d3-e3c2e9120858/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/96a65636ee386c6cc0fd629c9f54aac7dd527b612c86286bc34b46437b7530fc/merged","io.kubernetes.cri-o.Name":
"k8s_storage-provisioner_storage-provisioner_kube-system_2c07bcdc-3e04-4897-98d3-e3c2e9120858_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_2c07bcdc-3e04-4897-98d3-e3c2e9120858_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2c07bcdc-3e04-4897-98d3-e3c2e9120858/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2c07bcdc-3e04-4897-98d3-e3c2e9120858/containers/storage-provisioner/f1
a729d3\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2c07bcdc-3e04-4897-98d3-e3c2e9120858/volumes/kubernetes.io~projected/kube-api-access-q9drc\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2c07bcdc-3e04-4897-98d3-e3c2e9120858","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":tr
ue,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-10T22:54:23.009652002Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0","pid":2843,"status":"running","bundle":"/run/containers/storage/overlay-containers/27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0/userdata","rootfs":"/var/lib/containers/storage/overlay/d7591e1606bf21649f0fedd57d3912d3415f751c2a39e49367c56c46960a73b8/merged","created":"2021-08-10T22:54:15.733268009Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e46eedbb","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e46eedbb\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0","io.kuberne
tes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:54:15.601252913Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-9tljg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"02063573-f956-476d-9bae-54c2abbf38ec\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-9tljg_02063573-f956-476d-9bae-54c2abbf38ec/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d7591e1606bf21649f0fedd57d3912d3415f751c2a39e49367c56c46960a73b8/merged","io.kubernetes.cri-o.Name":"k8s_coredns_c
oredns-558bd4d5db-9tljg_kube-system_02063573-f956-476d-9bae-54c2abbf38ec_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-9tljg_kube-system_02063573-f956-476d-9bae-54c2abbf38ec_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/02063573-f956-476d-9bae-54c2abbf38ec/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/02063573-f956-476d-9bae-54c2abbf38ec/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/
var/lib/kubelet/pods/02063573-f956-476d-9bae-54c2abbf38ec/containers/coredns/e365b713\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/02063573-f956-476d-9bae-54c2abbf38ec/volumes/kubernetes.io~projected/kube-api-access-qcx65\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-9tljg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"02063573-f956-476d-9bae-54c2abbf38ec","kubernetes.io/config.seen":"2021-08-10T22:53:20.661725080Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012","pid":2155,"status":"running","bundle":"/run/containers/storage/overlay-containers/3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012/userdata","r
ootfs":"/var/lib/containers/storage/overlay/c7204eaed3f2ee1ea1be78222cfd85e4a48f2a6afd897a344e8743249f0ddf50/merged","created":"2021-08-10T22:53:21.181234009Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2f5a01da","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2f5a01da\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:53:20.882613494Z","io.kubernetes.cri-o.Imag
e":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-w546v\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-w546v_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c7204eaed3f2ee1ea1be78222cfd85e4a48f2a6afd897a344e8743249f0ddf50/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-w546v_kube-system_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8b
aa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-w546v_kube-system_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/containers/kindnet-cni/ba3af87a\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni
/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/volumes/kubernetes.io~projected/kube-api-access-5s4mm\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-w546v","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7","kubernetes.io/config.seen":"2021-08-10T22:53:19.715966627Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f","pid":3686,"status":"running","bundle":"/run/containers/storage/overlay-containers/37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f/userdata","rootfs":"/var/lib/containers/storage/overlay/33909dd580d726b7ed8aa0cde9acb10e765dda825095ad
bfb3085d8b028e9dc1/merged","created":"2021-08-10T22:54:23.417323054Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"ser
viceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-10T22:54:23.009652002Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_2c07bcdc-3e04-4897-98d3-e3c2e9120858_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:54:23.325873715Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernete
s.pod.uid\":\"2c07bcdc-3e04-4897-98d3-e3c2e9120858\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_2c07bcdc-3e04-4897-98d3-e3c2e9120858/37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"2c07bcdc-3e04-4897-98d3-e3c2e9120858\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/33909dd580d726b7ed8aa0cde9acb10e765dda825095adbfb3085d8b028e9dc1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_2c07bcdc-3e04-4897-98d3-e3c2e9120858_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io
.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2c07bcdc-3e04-4897-98d3-e3c2e9120858","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"co
ntainers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-10T22:54:23.009652002Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc","pid":1334,"status":"running","bundle":"/run/containers/storage/overlay-containers/46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc/userdata","rootfs":"/var/lib/containers/storage/overlay/23b22baf4c4f9210cce0dbf790b17ed3874a7d85b083df28ce48f510fbfdb9b5/merged","created":"2021-08-10T22:52:52.742040682Z","annotations":{"io.container.m
anager":"cri-o","io.kubernetes.container.hash":"4d902deb","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4d902deb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:52:52.465668139Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa851
2a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210810225233-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"160ac3a3a89c5d2b6c0448032f32313f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210810225233-345780_160ac3a3a89c5d2b6c0448032f32313f/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/23b22baf4c4f9210cce0dbf790b17ed3874a7d85b083df28ce48f510fbfdb9b5/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210810225233-345780_kube-system_160ac3a3a89c5d2b6c0448032f32313f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe","io.kubernetes.cri-o.SandboxName
":"k8s_etcd-pause-20210810225233-345780_kube-system_160ac3a3a89c5d2b6c0448032f32313f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/160ac3a3a89c5d2b6c0448032f32313f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/160ac3a3a89c5d2b6c0448032f32313f/containers/etcd/dba3b2bd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"160ac3a3a89c5d2b6c0448032f32313f","kubeadm
.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"160ac3a3a89c5d2b6c0448032f32313f","kubernetes.io/config.seen":"2021-08-10T22:52:49.970645682Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714","pid":1344,"status":"running","bundle":"/run/containers/storage/overlay-containers/57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714/userdata","rootfs":"/var/lib/containers/storage/overlay/18c3c37de94162b66dd4c899b0339d2c486b939eb75ba90c0b11cd43fd72b76b/merged","created":"2021-08-10T22:52:52.74118409Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/
termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:52:52.460877845Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-p
ause-20210810225233-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"94030c587273fafbeefd24272ee4f17c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210810225233-345780_94030c587273fafbeefd24272ee4f17c/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/18c3c37de94162b66dd4c899b0339d2c486b939eb75ba90c0b11cd43fd72b76b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210810225233-345780_kube-system_94030c587273fafbeefd24272ee4f17c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-202108
10225233-345780_kube-system_94030c587273fafbeefd24272ee4f17c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/94030c587273fafbeefd24272ee4f17c/containers/kube-controller-manager/3c1a15d6\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/94030c587273fafbeefd24272ee4f17c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"c
ontainer_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"94030c587273fafbeefd24272ee4f17c","kubernetes.io/config.hash":"94030c587273fafbeefd24272ee4f17c","kubernetes.io/config.seen":"2021-08-10T22:52:49.970622629Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234","pi
d":2059,"status":"running","bundle":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata","rootfs":"/var/lib/containers/storage/overlay/6b0d36271a3c7127e0fd8a1a6159ddb2e537227c7bff0e91a32a760960cf6307/merged","created":"2021-08-10T22:53:20.75716168Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:53:19.715966627Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-w546v_kube-system_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:53:20.63478181Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kub
ernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-w546v","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"694b6fb659\",\"app\":\"kindnet\",\"io.kubernetes.pod.uid\":\"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kindnet\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-w546v\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-w546v_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-w546v\",\"uid\":\"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/v
ar/lib/containers/storage/overlay/6b0d36271a3c7127e0fd8a1a6159ddb2e537227c7bff0e91a32a760960cf6307/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-w546v_kube-system_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata/shm","io.kubernetes.pod.name":"kindnet-w546v","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.
uid":"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-10T22:53:19.715966627Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2","pid":1198,"status":"running","bundle":"/run/containers/storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata","rootfs":"/var/lib/containers/storage/overlay/86563bdc30f9f8bd8041deec731973dcb246c65795ba7ef51b1b0306a9b51dd3/merged","created":"2021-08-10T22:52:51.509364476Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:52:49.970622629Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"94030c587273fafbeefd24272ee4f17
c\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210810225233-345780_kube-system_94030c587273fafbeefd24272ee4f17c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:52:51.35481879Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210810225233-345780","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"94030c587273fafbeefd24272ee4f17c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controll
er-manager-pause-20210810225233-345780\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210810225233-345780_94030c587273fafbeefd24272ee4f17c/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210810225233-345780\",\"uid\":\"94030c587273fafbeefd24272ee4f17c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/86563bdc30f9f8bd8041deec731973dcb246c65795ba7ef51b1b0306a9b51dd3/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210810225233-345780_kube-system_94030c587273fafbeefd24272ee4f17c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/
storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"94030c587273fafbeefd24272ee4f17c","kubernetes.io/config.hash":"94030c587273fafbeefd24272ee4f17c","kubernetes.io/config.seen":"2021-08-10T22:52:49.970622629Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181","pid":2811,"status":
"running","bundle":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata","rootfs":"/var/lib/containers/storage/overlay/055cc6e7a8b9cc9e86f953d7a00fb748acf093f5fa40c1cb684434f1ea2517c6/merged","created":"2021-08-10T22:54:15.545366086Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:53:20.661725080Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"vethb1cc53e2\",\"mac\":\"f2:88:5f:1d:0a:cf\"},{\"name\":\"eth0\",\"mac\":\"1e:c6:5b:eb:c6:d4\",\"sandbox\":\"/var/run/netns/de69ec9c-b650-4219-9b68-b3b9062cf15a\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"97e17aed56acdaa0e3ff
90b3dea55cffd35d24451d900582ae64379d0ea18181","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-9tljg_kube-system_02063573-f956-476d-9bae-54c2abbf38ec_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:54:15.385490254Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-9tljg","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-9tljg","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"02063573-f956-476d-9bae-54c2abbf38ec\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-9tljg\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558
bd4d5db-9tljg_02063573-f956-476d-9bae-54c2abbf38ec/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-9tljg\",\"uid\":\"02063573-f956-476d-9bae-54c2abbf38ec\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/055cc6e7a8b9cc9e86f953d7a00fb748acf093f5fa40c1cb684434f1ea2517c6/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-9tljg_kube-system_02063573-f956-476d-9bae-54c2abbf38ec_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181","io.kuber
netes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-9tljg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"02063573-f956-476d-9bae-54c2abbf38ec","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-10T22:53:20.661725080Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316","pid":1321,"status":"running","bundle":"/run/containers/storage/overlay-containers/a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316/userdata","rootfs":"/var/lib/containers/storage/overlay/0dac4eec877360c2cbb458d7fca6800c7f1c9af4c3a68d2b1e8d31929873af77/merged","created":"2021-08-10T22:52:52.741248056Z","annotations":{"io
.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:52:52.445248353Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1
302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210810225233-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"065b8ee5be44c8bb759a20e5e68abf58\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210810225233-345780_065b8ee5be44c8bb759a20e5e68abf58/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0dac4eec877360c2cbb458d7fca6800c7f1c9af4c3a68d2b1e8d31929873af77/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210810225233-345780_kube-system_065b8ee5be44c8bb759a20e5e68abf58_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":
"e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210810225233-345780_kube-system_065b8ee5be44c8bb759a20e5e68abf58_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/065b8ee5be44c8bb759a20e5e68abf58/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/065b8ee5be44c8bb759a20e5e68abf58/containers/kube-scheduler/fc927cdf\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"065b8ee5be44c8bb75
9a20e5e68abf58","kubernetes.io/config.hash":"065b8ee5be44c8bb759a20e5e68abf58","kubernetes.io/config.seen":"2021-08-10T22:52:49.970643623Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75","pid":1186,"status":"running","bundle":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75/userdata","rootfs":"/var/lib/containers/storage/overlay/4051995db417a9d6778d389a39ea41c261ccd9b7dbb2dd7a9b84b750e196309d/merged","created":"2021-08-10T22:52:51.509346737Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:52:49.970647167Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b674a9b0919
cd96b03ee6b9415bb734d\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210810225233-345780_kube-system_b674a9b0919cd96b03ee6b9415bb734d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:52:51.351867951Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210810225233-345780","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-2021081022
5233-345780\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"b674a9b0919cd96b03ee6b9415bb734d\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210810225233-345780_b674a9b0919cd96b03ee6b9415bb734d/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210810225233-345780\",\"uid\":\"b674a9b0919cd96b03ee6b9415bb734d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4051995db417a9d6778d389a39ea41c261ccd9b7dbb2dd7a9b84b750e196309d/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210810225233-345780_kube-system_b674a9b0919cd96b03ee6b9415bb734d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","i
o.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b674a9b0919cd96b03ee6b9415bb734d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b674a9b0919cd96b03ee6b9415bb734d","kubernetes.io/config.seen":"2021-08-10T22:52:49.970647167Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"
root"},{"ociVersion":"1.0.2-dev","id":"b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f","pid":2066,"status":"running","bundle":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata","rootfs":"/var/lib/containers/storage/overlay/27e60d11d9de46b808d545cdacf425a10890a0aba8e7937f529d9a414ddd360f/merged","created":"2021-08-10T22:53:20.757446716Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:53:19.710142604Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-pwcm8_kube-system_6dee8a63-1575-445b-9978-c72ad86a1d79_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"202
1-08-10T22:53:20.632348013Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-pwcm8","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"6dee8a63-1575-445b-9978-c72ad86a1d79\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-pwcm8\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pwcm8_6dee8a63-1575-445b-9978-c72ad86a1d79/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-pwcm8\",\"uid\":\"6dee8a63-1575-445b-9978-c72ad86a1
d79\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/27e60d11d9de46b808d545cdacf425a10890a0aba8e7937f529d9a414ddd360f/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-pwcm8_kube-system_6dee8a63-1575-445b-9978-c72ad86a1d79_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata/shm","io.kubernetes.pod.name":"k
ube-proxy-pwcm8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"6dee8a63-1575-445b-9978-c72ad86a1d79","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-10T22:53:19.710142604Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba","pid":1176,"status":"running","bundle":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata","rootfs":"/var/lib/containers/storage/overlay/162f110630c0c13f4030a81bcf015a4bc06e9a0e6cba243e21a8edf0cf6f43be/merged","created":"2021-08-10T22:52:51.509378429Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"065b8ee5be44c8bb759a20e5e68abf58\",\
"kubernetes.io/config.seen\":\"2021-08-10T22:52:49.970643623Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210810225233-345780_kube-system_065b8ee5be44c8bb759a20e5e68abf58_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:52:51.344180698Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210810225233-345780","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"065b8ee5be44c8bb759a20
e5e68abf58\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210810225233-345780\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210810225233-345780_065b8ee5be44c8bb759a20e5e68abf58/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210810225233-345780\",\"uid\":\"065b8ee5be44c8bb759a20e5e68abf58\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/162f110630c0c13f4030a81bcf015a4bc06e9a0e6cba243e21a8edf0cf6f43be/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210810225233-345780_kube-system_065b8ee5be44c8bb759a20e5e68abf58_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/st
orage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"065b8ee5be44c8bb759a20e5e68abf58","kubernetes.io/config.hash":"065b8ee5be44c8bb759a20e5e68abf58","kubernetes.io/config.seen":"2021-08-10T22:52:49.970643623Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
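The raw `runc list -f json` dump above is hard to scan by eye; the container-to-pod mapping can be recovered from the `io.kubernetes.pod.name` / `io.kubernetes.pod.namespace` annotations CRI-O attaches. A minimal sketch (the inlined sample is trimmed from the dump above and keeps only the fields used; the full dump has the same shape):

```python
import json

# Trimmed sample with the same shape as the `runc list -f json` output above.
runc_list = json.loads("""[
  {"id": "b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f",
   "status": "running",
   "annotations": {"io.kubernetes.pod.name": "kube-proxy-pwcm8",
                   "io.kubernetes.pod.namespace": "kube-system"}},
  {"id": "e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba",
   "status": "running",
   "annotations": {"io.kubernetes.pod.name": "kube-scheduler-pause-20210810225233-345780",
                   "io.kubernetes.pod.namespace": "kube-system"}}
]""")

def pods_by_container(entries):
    """Map short container ID -> (namespace, pod name) from CRI-O annotations."""
    return {
        e["id"][:12]: (e["annotations"]["io.kubernetes.pod.namespace"],
                       e["annotations"]["io.kubernetes.pod.name"])
        for e in entries
    }

print(pods_by_container(runc_list))
```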
	I0810 22:54:32.071111  539635 cri.go:113] list returned 16 containers
	I0810 22:54:32.071130  539635 cri.go:116] container: {ID:15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356 Status:running}
	I0810 22:54:32.071184  539635 cri.go:116] container: {ID:1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe Status:running}
	I0810 22:54:32.071189  539635 cri.go:118] skipping 1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe - not in ps
	I0810 22:54:32.071193  539635 cri.go:116] container: {ID:22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32 Status:running}
	I0810 22:54:32.071200  539635 cri.go:116] container: {ID:26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d Status:running}
	I0810 22:54:32.071207  539635 cri.go:116] container: {ID:27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0 Status:running}
	I0810 22:54:32.071211  539635 cri.go:116] container: {ID:3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012 Status:running}
	I0810 22:54:32.071219  539635 cri.go:116] container: {ID:37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f Status:running}
	I0810 22:54:32.071223  539635 cri.go:118] skipping 37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f - not in ps
	I0810 22:54:32.071229  539635 cri.go:116] container: {ID:46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc Status:running}
	I0810 22:54:32.071233  539635 cri.go:116] container: {ID:57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714 Status:running}
	I0810 22:54:32.071241  539635 cri.go:116] container: {ID:898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234 Status:running}
	I0810 22:54:32.071249  539635 cri.go:118] skipping 898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234 - not in ps
	I0810 22:54:32.071253  539635 cri.go:116] container: {ID:94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2 Status:running}
	I0810 22:54:32.071257  539635 cri.go:118] skipping 94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2 - not in ps
	I0810 22:54:32.071268  539635 cri.go:116] container: {ID:97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181 Status:running}
	I0810 22:54:32.071275  539635 cri.go:118] skipping 97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181 - not in ps
	I0810 22:54:32.071278  539635 cri.go:116] container: {ID:a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316 Status:running}
	I0810 22:54:32.071283  539635 cri.go:116] container: {ID:a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75 Status:running}
	I0810 22:54:32.071287  539635 cri.go:118] skipping a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75 - not in ps
	I0810 22:54:32.071291  539635 cri.go:116] container: {ID:b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f Status:running}
	I0810 22:54:32.071295  539635 cri.go:118] skipping b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f - not in ps
	I0810 22:54:32.071298  539635 cri.go:116] container: {ID:e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba Status:running}
	I0810 22:54:32.071306  539635 cri.go:118] skipping e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba - not in ps
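The cri.go lines above show minikube intersecting the full `runc list` output with the set of IDs the CRI reports as process containers, pausing only the intersection; sandbox (POD) containers log `skipping ... - not in ps`. A rough sketch of that filtering, with IDs taken from the log (the logic here is illustrative, not minikube's actual code):

```python
# Containers reported by `runc list`, from the log above (subset).
runc_ids = [
    "15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356",
    "1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe",
    "22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32",
    "a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75",
]

# IDs the CRI listed as process containers; sandboxes are absent from this set.
in_ps = {
    "15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356",
    "22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32",
}

def to_pause(ids, ps):
    """Keep only containers also present in ps; the rest are skipped as 'not in ps'."""
    return [cid for cid in ids if cid in ps]

# The two surviving IDs are the ones `sudo runc pause <id>` is then run against.
print(to_pause(runc_ids, in_ps))
```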
	I0810 22:54:32.071342  539635 ssh_runner.go:149] Run: sudo runc pause 15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356
	I0810 22:54:32.089647  539635 ssh_runner.go:149] Run: sudo runc pause 22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32
	I0810 22:54:32.106869  539635 ssh_runner.go:149] Run: sudo runc pause 26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d
	I0810 22:54:32.123956  539635 ssh_runner.go:149] Run: sudo runc pause 27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0
	I0810 22:54:32.141698  539635 ssh_runner.go:149] Run: sudo runc pause 3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012
	I0810 22:54:32.158944  539635 ssh_runner.go:149] Run: sudo runc pause 46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc
	I0810 22:54:35.205510  539635 out.go:177] 
	W0810 22:54:35.205793  539635 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc pause 46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-10T22:54:32Z" level=error msg="unable to freeze"
	
	W0810 22:54:35.205818  539635 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0810 22:54:35.354216  539635 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0810 22:54:35.356008  539635 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210810225233-345780 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210810225233-345780
helpers_test.go:236: (dbg) docker inspect pause-20210810225233-345780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8",
	        "Created": "2021-08-10T22:52:34.698259045Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:52:35.182460391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/hostname",
	        "HostsPath": "/var/lib/docker/containers/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/hosts",
	        "LogPath": "/var/lib/docker/containers/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8-json.log",
	        "Name": "/pause-20210810225233-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210810225233-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210810225233-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cd2fd6f184e67618f831d8314c23e4cf96be00c8a82cf64a9b5e6394f60054e2-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd2fd6f184e67618f831d8314c23e4cf96be00c8a82cf64a9b5e6394f60054e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd2fd6f184e67618f831d8314c23e4cf96be00c8a82cf64a9b5e6394f60054e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd2fd6f184e67618f831d8314c23e4cf96be00c8a82cf64a9b5e6394f60054e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210810225233-345780",
	                "Source": "/var/lib/docker/volumes/pause-20210810225233-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210810225233-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210810225233-345780",
	                "name.minikube.sigs.k8s.io": "pause-20210810225233-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f404af36dee4eaa00a4f1a79dba38668589f298b9dea0f5fe10c13defbea12c9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f404af36dee4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210810225233-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f944c8ee123a"
	                    ],
	                    "NetworkID": "51e1127032be582af01cfcb85b893562f9fc6c893e0e850dd2e1e3269326ab00",
	                    "EndpointID": "414033aeecc037bf5562dc276192e2977befd5451fdd0adc84be4523c20d6a3a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
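For the post-mortem, the most useful part of the `docker inspect` output is usually `NetworkSettings.Ports` (which host port maps to the API server's 8443, SSH's 22, and so on). A small helper to pull that mapping out (the inlined sample mirrors the shape of the dump above, with only two of the five ports kept):

```python
import json

inspect_out = json.loads("""[{"NetworkSettings": {"Ports": {
  "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33134"}],
  "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33131"}]
}}}]""")

def host_ports(inspect_json):
    """Container port -> host port, for ports that are actually bound."""
    ports = inspect_json[0]["NetworkSettings"]["Ports"]
    return {cport: binds[0]["HostPort"] for cport, binds in ports.items() if binds}

print(host_ports(inspect_out))  # {'22/tcp': '33134', '8443/tcp': '33131'}
```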
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210810225233-345780 -n pause-20210810225233-345780
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210810225233-345780 -n pause-20210810225233-345780: exit status 2 (340.84232ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210810225233-345780 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210810225233-345780 logs -n 25: exit status 110 (11.0277219s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | test-preload-20210810224612-345780         | test-preload-20210810224612-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:48:34 UTC | Tue, 10 Aug 2021 22:48:35 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	| delete  | -p                                         | test-preload-20210810224612-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:48:36 UTC | Tue, 10 Aug 2021 22:48:40 UTC |
	|         | test-preload-20210810224612-345780         |                                            |         |         |                               |                               |
	| start   | -p                                         | scheduled-stop-20210810224840-345780       | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:48:40 UTC | Tue, 10 Aug 2021 22:49:07 UTC |
	|         | scheduled-stop-20210810224840-345780       |                                            |         |         |                               |                               |
	|         | --memory=2048 --driver=docker              |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210810224840-345780       | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:08 UTC | Tue, 10 Aug 2021 22:49:08 UTC |
	|         | scheduled-stop-20210810224840-345780       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210810224840-345780       | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:21 UTC | Tue, 10 Aug 2021 22:49:38 UTC |
	|         | scheduled-stop-20210810224840-345780       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210810224840-345780       | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:38 UTC | Tue, 10 Aug 2021 22:49:43 UTC |
	|         | scheduled-stop-20210810224840-345780       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210810224943-345780 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:51 UTC | Tue, 10 Aug 2021 22:49:57 UTC |
	|         | insufficient-storage-20210810224943-345780 |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210810224957-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:57 UTC | Tue, 10 Aug 2021 22:50:53 UTC |
	|         | kubernetes-upgrade-20210810224957-345780   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p                                         | offline-crio-20210810224957-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:57 UTC | Tue, 10 Aug 2021 22:51:43 UTC |
	|         | offline-crio-20210810224957-345780         |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                     |                                            |         |         |                               |                               |
	|         | --memory=2048 --wait=true                  |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | offline-crio-20210810224957-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:51:44 UTC | Tue, 10 Aug 2021 22:51:47 UTC |
	|         | offline-crio-20210810224957-345780         |                                            |         |         |                               |                               |
	| delete  | -p                                         | running-upgrade-20210810224957-345780      | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:28 UTC | Tue, 10 Aug 2021 22:52:31 UTC |
	|         | running-upgrade-20210810224957-345780      |                                            |         |         |                               |                               |
	| delete  | -p                                         | stopped-upgrade-20210810224957-345780      | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:45 UTC | Tue, 10 Aug 2021 22:52:48 UTC |
	|         | stopped-upgrade-20210810224957-345780      |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210810225248-345780              | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:48 UTC | Tue, 10 Aug 2021 22:52:48 UTC |
	|         | kubenet-20210810225248-345780              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210810225248-345780              | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:48 UTC | Tue, 10 Aug 2021 22:52:49 UTC |
	|         | flannel-20210810225248-345780              |                                            |         |         |                               |                               |
	| delete  | -p false-20210810225249-345780             | false-20210810225249-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:49 UTC | Tue, 10 Aug 2021 22:52:49 UTC |
	| start   | -p                                         | force-systemd-flag-20210810225249-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:49 UTC | Tue, 10 Aug 2021 22:53:34 UTC |
	|         | force-systemd-flag-20210810225249-345780   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210810225249-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:34 UTC | Tue, 10 Aug 2021 22:53:37 UTC |
	|         | force-systemd-flag-20210810225249-345780   |                                            |         |         |                               |                               |
	| start   | -p                                         | missing-upgrade-20210810225147-345780      | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:06 UTC | Tue, 10 Aug 2021 22:53:54 UTC |
	|         | missing-upgrade-20210810225147-345780      |                                            |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | missing-upgrade-20210810225147-345780      | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:54 UTC | Tue, 10 Aug 2021 22:53:57 UTC |
	|         | missing-upgrade-20210810225147-345780      |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-env-20210810225337-345780    | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:37 UTC | Tue, 10 Aug 2021 22:54:11 UTC |
	|         | force-systemd-env-20210810225337-345780    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210810225337-345780    | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:13 UTC | Tue, 10 Aug 2021 22:54:17 UTC |
	|         | force-systemd-env-20210810225337-345780    |                                            |         |         |                               |                               |
	| start   | -p pause-20210810225233-345780             | pause-20210810225233-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:33 UTC | Tue, 10 Aug 2021 22:54:17 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p pause-20210810225233-345780             | pause-20210810225233-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:18 UTC | Tue, 10 Aug 2021 22:54:24 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| pause   | -p pause-20210810225233-345780             | pause-20210810225233-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:24 UTC | Tue, 10 Aug 2021 22:54:24 UTC |
	|         | --alsologtostderr -v=5                     |                                            |         |         |                               |                               |
	| unpause | -p pause-20210810225233-345780             | pause-20210810225233-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:25 UTC | Tue, 10 Aug 2021 22:54:25 UTC |
	|         | --alsologtostderr -v=5                     |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:54:18
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:54:18.031304  536245 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:54:18.031415  536245 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:54:18.031444  536245 out.go:311] Setting ErrFile to fd 2...
	I0810 22:54:18.031449  536245 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:54:18.031629  536245 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:54:18.031947  536245 out.go:305] Setting JSON to false
	I0810 22:54:18.073571  536245 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":9420,"bootTime":1628626639,"procs":267,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:54:18.073683  536245 start.go:121] virtualization: kvm guest
	I0810 22:54:18.078907  536245 out.go:177] * [pause-20210810225233-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:54:18.080567  536245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:54:18.079098  536245 notify.go:169] Checking for updates...
	I0810 22:54:18.082433  536245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:54:18.084035  536245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:54:17.970879  536187 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:54:17.974055  536187 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:54:18.036911  536187 docker.go:132] docker version: linux-19.03.15
	I0810 22:54:18.037054  536187 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:54:18.148022  536187 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:53 SystemTime:2021-08-10 22:54:18.081595263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:54:18.148117  536187 docker.go:244] overlay module found
	I0810 22:54:18.150428  536187 out.go:177] * Using the docker driver based on user configuration
	I0810 22:54:18.150464  536187 start.go:278] selected driver: docker
	I0810 22:54:18.150472  536187 start.go:751] validating driver "docker" against <nil>
	I0810 22:54:18.150498  536187 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:54:18.150556  536187 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:54:18.150576  536187 out.go:242] ! Your cgroup does not allow setting memory.
	I0810 22:54:18.152199  536187 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:54:18.154358  536187 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:54:18.273166  536187 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:53 SystemTime:2021-08-10 22:54:18.211910586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:54:18.273297  536187 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:54:18.273531  536187 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:54:18.273571  536187 cni.go:93] Creating CNI manager for ""
	I0810 22:54:18.273580  536187 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:54:18.273593  536187 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0810 22:54:18.273602  536187 start_flags.go:277] config:
	{Name:old-k8s-version-20210810225417-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210810225417-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:54:18.276122  536187 out.go:177] * Starting control plane node old-k8s-version-20210810225417-345780 in cluster old-k8s-version-20210810225417-345780
	I0810 22:54:18.276196  536187 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:54:18.085669  536245 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:54:18.086553  536245 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:54:18.161014  536245 docker.go:132] docker version: linux-19.03.15
	I0810 22:54:18.161114  536245 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:54:18.292216  536245 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:54 SystemTime:2021-08-10 22:54:18.225392106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:54:18.292371  536245 docker.go:244] overlay module found
	I0810 22:54:18.277967  536187 out.go:177] * Pulling base image ...
	I0810 22:54:18.278013  536187 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0810 22:54:18.278062  536187 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:54:18.278079  536187 cache.go:56] Caching tarball of preloaded images
	I0810 22:54:18.278111  536187 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:54:18.278356  536187 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:54:18.278376  536187 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on crio
	I0810 22:54:18.278524  536187 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/config.json ...
	I0810 22:54:18.278554  536187 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/config.json: {Name:mka3308dbcf2a0dc97f53c4b4881d3ffda8c2255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:18.395342  536187 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:54:18.395379  536187 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:54:18.395399  536187 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:54:18.395468  536187 start.go:313] acquiring machines lock for old-k8s-version-20210810225417-345780: {Name:mkc897cb2f1a3a018402055eba05786fd6d907e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:18.395635  536187 start.go:317] acquired machines lock for "old-k8s-version-20210810225417-345780" in 140.96µs
	I0810 22:54:18.395669  536187 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210810225417-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210810225417-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0810 22:54:18.395773  536187 start.go:126] createHost starting for "" (driver="docker")
	I0810 22:54:18.294696  536245 out.go:177] * Using the docker driver based on existing profile
	I0810 22:54:18.294738  536245 start.go:278] selected driver: docker
	I0810 22:54:18.294748  536245 start.go:751] validating driver "docker" against &{Name:pause-20210810225233-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210810225233-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:54:18.294900  536245 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:54:18.295545  536245 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:54:18.397766  536245 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:55 SystemTime:2021-08-10 22:54:18.336810315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:54:18.399506  536245 cni.go:93] Creating CNI manager for ""
	I0810 22:54:18.399524  536245 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:54:18.399536  536245 start_flags.go:277] config:
	{Name:pause-20210810225233-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210810225233-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:54:18.402058  536245 out.go:177] * Starting control plane node pause-20210810225233-345780 in cluster pause-20210810225233-345780
	I0810 22:54:18.402108  536245 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:54:18.403663  536245 out.go:177] * Pulling base image ...
	I0810 22:54:18.403698  536245 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:54:18.403738  536245 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:54:18.403752  536245 cache.go:56] Caching tarball of preloaded images
	I0810 22:54:18.403752  536245 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:54:18.403986  536245 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:54:18.404005  536245 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:54:18.404138  536245 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/config.json ...
	I0810 22:54:18.538652  536245 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:54:18.538687  536245 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:54:18.538705  536245 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:54:18.538749  536245 start.go:313] acquiring machines lock for pause-20210810225233-345780: {Name:mk22511c8be5ecc020d71469147269149ddfd4e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:18.538876  536245 start.go:317] acquired machines lock for "pause-20210810225233-345780" in 79.441µs
	I0810 22:54:18.538901  536245 start.go:93] Skipping create...Using existing machine configuration
	I0810 22:54:18.538907  536245 fix.go:55] fixHost starting: 
	I0810 22:54:18.539145  536245 cli_runner.go:115] Run: docker container inspect pause-20210810225233-345780 --format={{.State.Status}}
	I0810 22:54:18.591804  536245 fix.go:108] recreateIfNeeded on pause-20210810225233-345780: state=Running err=<nil>
	W0810 22:54:18.591855  536245 fix.go:134] unexpected machine state, will restart: <nil>
	I0810 22:54:18.595311  536245 out.go:177] * Updating the running docker "pause-20210810225233-345780" container ...
	I0810 22:54:18.595349  536245 machine.go:88] provisioning docker machine ...
	I0810 22:54:18.595380  536245 ubuntu.go:169] provisioning hostname "pause-20210810225233-345780"
	I0810 22:54:18.595458  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:18.644673  536245 main.go:130] libmachine: Using SSH client type: native
	I0810 22:54:18.644896  536245 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0810 22:54:18.644914  536245 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210810225233-345780 && echo "pause-20210810225233-345780" | sudo tee /etc/hostname
	I0810 22:54:18.780330  536245 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210810225233-345780
	
	I0810 22:54:18.780416  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:18.828661  536245 main.go:130] libmachine: Using SSH client type: native
	I0810 22:54:18.828834  536245 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0810 22:54:18.828854  536245 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210810225233-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210810225233-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210810225233-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:54:18.957695  536245 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:54:18.957727  536245 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:54:18.957763  536245 ubuntu.go:177] setting up certificates
	I0810 22:54:18.957776  536245 provision.go:83] configureAuth start
	I0810 22:54:18.957891  536245 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210810225233-345780
	I0810 22:54:19.018825  536245 provision.go:137] copyHostCerts
	I0810 22:54:19.018899  536245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:54:19.018927  536245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:54:19.018983  536245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:54:19.019105  536245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:54:19.019121  536245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:54:19.019145  536245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:54:19.019245  536245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:54:19.019256  536245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:54:19.019280  536245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:54:19.019351  536245 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.pause-20210810225233-345780 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210810225233-345780]
	I0810 22:54:19.302851  536245 provision.go:171] copyRemoteCerts
	I0810 22:54:19.302924  536245 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:54:19.302995  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:19.348000  536245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/pause-20210810225233-345780/id_rsa Username:docker}
	I0810 22:54:19.454000  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0810 22:54:19.507038  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0810 22:54:19.527293  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:54:19.545444  536245 provision.go:86] duration metric: configureAuth took 587.653319ms
	I0810 22:54:19.545473  536245 ubuntu.go:193] setting minikube options for container-runtime
	I0810 22:54:19.545744  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:19.603883  536245 main.go:130] libmachine: Using SSH client type: native
	I0810 22:54:19.604157  536245 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0810 22:54:19.604193  536245 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:54:20.335224  536245 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:54:20.335260  536245 machine.go:91] provisioned docker machine in 1.739902271s
	I0810 22:54:20.335275  536245 start.go:267] post-start starting for "pause-20210810225233-345780" (driver="docker")
	I0810 22:54:20.335284  536245 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:54:20.335354  536245 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:54:20.335407  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:20.392530  536245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/pause-20210810225233-345780/id_rsa Username:docker}
	I0810 22:54:20.486762  536245 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:54:20.490779  536245 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 22:54:20.490808  536245 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 22:54:20.490819  536245 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 22:54:20.490827  536245 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0810 22:54:20.490840  536245 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:54:20.490909  536245 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:54:20.491025  536245 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> 3457802.pem in /etc/ssl/certs
	I0810 22:54:20.491164  536245 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:54:20.499571  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:54:20.531877  536245 start.go:270] post-start completed in 196.582254ms
	I0810 22:54:20.531981  536245 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:54:20.532039  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:20.585528  536245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/pause-20210810225233-345780/id_rsa Username:docker}
	I0810 22:54:20.675735  536245 fix.go:57] fixHost completed within 2.136816317s
	I0810 22:54:20.675767  536245 start.go:80] releasing machines lock for "pause-20210810225233-345780", held for 2.136876401s
	I0810 22:54:20.675868  536245 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210810225233-345780
	I0810 22:54:20.735979  536245 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:54:20.736015  536245 ssh_runner.go:149] Run: systemctl --version
	I0810 22:54:20.736069  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:20.736084  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:20.799995  536245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/pause-20210810225233-345780/id_rsa Username:docker}
	I0810 22:54:20.803604  536245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/pause-20210810225233-345780/id_rsa Username:docker}
	I0810 22:54:20.889954  536245 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:54:20.943164  536245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:54:20.957142  536245 docker.go:153] disabling docker service ...
	I0810 22:54:20.957208  536245 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:54:20.973570  536245 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:54:20.984897  536245 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:54:21.140122  536245 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:54:21.268145  536245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:54:21.279558  536245 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:54:21.294430  536245 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:54:21.304982  536245 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:54:21.305034  536245 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:54:21.314352  536245 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:54:21.322078  536245 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:54:21.322134  536245 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:54:21.330160  536245 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:54:21.337150  536245 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:54:21.483755  536245 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:54:21.493813  536245 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:54:21.493878  536245 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:54:21.497121  536245 start.go:417] Will wait 60s for crictl version
	I0810 22:54:21.497171  536245 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:54:21.528909  536245 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0810 22:54:21.529018  536245 ssh_runner.go:149] Run: crio --version
	I0810 22:54:21.614827  536245 ssh_runner.go:149] Run: crio --version
	I0810 22:54:21.694201  536245 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0810 22:54:21.694319  536245 cli_runner.go:115] Run: docker network inspect pause-20210810225233-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:54:21.739576  536245 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0810 22:54:21.743795  536245 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:54:21.743863  536245 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:54:21.776738  536245 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:54:21.776761  536245 crio.go:333] Images already preloaded, skipping extraction
	I0810 22:54:21.776806  536245 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:54:21.806274  536245 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:54:21.806299  536245 cache_images.go:74] Images are preloaded, skipping loading
	I0810 22:54:21.806367  536245 ssh_runner.go:149] Run: crio config
	I0810 22:54:21.887384  536245 cni.go:93] Creating CNI manager for ""
	I0810 22:54:21.887409  536245 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:54:21.887421  536245 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:54:21.887433  536245 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210810225233-345780 NodeName:pause-20210810225233-345780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:54:21.887592  536245 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210810225233-345780"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:54:21.887709  536245 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-20210810225233-345780 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210810225233-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:54:21.887772  536245 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:54:21.896452  536245 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:54:21.896525  536245 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:54:21.903941  536245 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (558 bytes)
	I0810 22:54:21.916552  536245 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:54:21.929158  536245 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0810 22:54:21.942738  536245 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0810 22:54:21.945859  536245 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780 for IP: 192.168.49.2
	I0810 22:54:21.945915  536245 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:54:21.945942  536245 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:54:21.946010  536245 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/client.key
	I0810 22:54:21.946039  536245 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/apiserver.key.dd3b5fb2
	I0810 22:54:21.946063  536245 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/proxy-client.key
	I0810 22:54:21.946193  536245 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem (1338 bytes)
	W0810 22:54:21.946262  536245 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780_empty.pem, impossibly tiny 0 bytes
	I0810 22:54:21.946277  536245 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0810 22:54:21.946316  536245 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:54:21.946355  536245 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:54:21.946386  536245 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:54:21.946450  536245 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:54:21.947979  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:54:21.965003  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0810 22:54:21.982737  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:54:22.000289  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0810 22:54:22.018974  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:54:22.037661  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:54:22.055727  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:54:22.074589  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0810 22:54:22.095439  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem --> /usr/share/ca-certificates/345780.pem (1338 bytes)
	I0810 22:54:22.113942  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /usr/share/ca-certificates/3457802.pem (1708 bytes)
	I0810 22:54:22.131697  536245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:54:22.148253  536245 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:54:22.161523  536245 ssh_runner.go:149] Run: openssl version
	I0810 22:54:22.166798  536245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/345780.pem && ln -fs /usr/share/ca-certificates/345780.pem /etc/ssl/certs/345780.pem"
	I0810 22:54:22.174512  536245 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/345780.pem
	I0810 22:54:22.178146  536245 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:29 /usr/share/ca-certificates/345780.pem
	I0810 22:54:22.178204  536245 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/345780.pem
	I0810 22:54:22.183279  536245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/345780.pem /etc/ssl/certs/51391683.0"
	I0810 22:54:22.191382  536245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3457802.pem && ln -fs /usr/share/ca-certificates/3457802.pem /etc/ssl/certs/3457802.pem"
	I0810 22:54:22.200766  536245 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3457802.pem
	I0810 22:54:22.204651  536245 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:29 /usr/share/ca-certificates/3457802.pem
	I0810 22:54:22.204723  536245 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3457802.pem
	I0810 22:54:22.211196  536245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3457802.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 22:54:22.219001  536245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:54:22.228190  536245 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:54:22.231658  536245 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:54:22.231714  536245 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:54:22.236653  536245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:54:22.244740  536245 kubeadm.go:390] StartCluster: {Name:pause-20210810225233-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210810225233-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:54:22.244879  536245 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:54:22.244951  536245 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:54:22.271529  536245 cri.go:76] found id: "27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0"
	I0810 22:54:22.271564  536245 cri.go:76] found id: "3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012"
	I0810 22:54:22.271571  536245 cri.go:76] found id: "15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356"
	I0810 22:54:22.271578  536245 cri.go:76] found id: "46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc"
	I0810 22:54:22.271584  536245 cri.go:76] found id: "a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316"
	I0810 22:54:22.271590  536245 cri.go:76] found id: "22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32"
	I0810 22:54:22.271595  536245 cri.go:76] found id: "57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714"
	I0810 22:54:22.271601  536245 cri.go:76] found id: ""
	I0810 22:54:22.271654  536245 ssh_runner.go:149] Run: sudo runc list -f json
	I0810 22:54:22.309769  536245 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356","pid":2165,"status":"running","bundle":"/run/containers/storage/overlay-containers/15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356/userdata","rootfs":"/var/lib/containers/storage/overlay/071c4e44af528aeace7f08903f94db76d8f44aff75afa7a91f7d3b9856a0864e/merged","created":"2021-08-10T22:53:21.157739195Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"26776f60","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"26776f60\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:53:20.877927351Z","io.kubernetes.cri-o.Image":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.3","io.kubernetes.cri-o.ImageRef":"adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-pwcm8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6dee8a63-1575-445b-9978-c72ad86a1d79\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pwcm8_6dee8a63-1575-445b-9978-c72ad86a1d79/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/071c4e44af528aeace7f08903f94db76d8f44aff75afa7a91f7d3b9856a0864e/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-pwcm8_kube-system_6dee8a63-1575-445b-9978-c72ad86a1d79_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-pwcm8_kube-system_6dee8a63-1575-445b-9978-c72ad86a1d79_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pod
s/6dee8a63-1575-445b-9978-c72ad86a1d79/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6dee8a63-1575-445b-9978-c72ad86a1d79/containers/kube-proxy/5278d268\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/6dee8a63-1575-445b-9978-c72ad86a1d79/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/6dee8a63-1575-445b-9978-c72ad86a1d79/volumes/kubernetes.io~projected/kube-api-access-62t24\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-pwcm8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6dee8a63-1575-445b-9978-c72ad86a1d79","kubernetes.io/config.seen":"2021-08-10T22:53:19.710142604Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property
.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe","pid":1180,"status":"running","bundle":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata","rootfs":"/var/lib/containers/storage/overlay/5c0f9493127bf73e4dfba6fe76a3f2c264b05d896aac47faa68d474488dd2858/merged","created":"2021-08-10T22:52:51.517283387Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"160ac3a3a89c5d2b6c0448032f32313f\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.49.2:2379\",\"kubernetes.io/config.seen\":\"2021-08-10T22:52:49.970645682Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe","io.kubernetes.cri
-o.ContainerName":"k8s_POD_etcd-pause-20210810225233-345780_kube-system_160ac3a3a89c5d2b6c0448032f32313f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:52:51.357977899Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-pause-20210810225233-345780","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"160ac3a3a89c5d2b6c0448032f32313f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210810225233-345780\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210810225233-345780_160ac3a3a89c5d2b6c0448032f32313f/1be
5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210810225233-345780\",\"uid\":\"160ac3a3a89c5d2b6c0448032f32313f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5c0f9493127bf73e4dfba6fe76a3f2c264b05d896aac47faa68d474488dd2858/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210810225233-345780_kube-system_160ac3a3a89c5d2b6c0448032f32313f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe","io.kubernetes.cri-o.SeccompProfilePath":"
runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"160ac3a3a89c5d2b6c0448032f32313f","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"160ac3a3a89c5d2b6c0448032f32313f","kubernetes.io/config.seen":"2021-08-10T22:52:49.970645682Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32","pid":1335,"status":"running","bundle":"/run/containers/storage/overlay-containers/22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32/userdata","rootfs":"/var/lib/containers/storage/overlay/e391f330ab4e80cdf4e04fe94b6cf5e402b735ee77b3ddfb6a0
639a4c8b9844c/merged","created":"2021-08-10T22:52:52.741191479Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8cf05ddb","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8cf05ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:52:52.453374204Z","io.kubernetes.cri-o.Image":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.I
mageName":"k8s.gcr.io/kube-apiserver:v1.21.3","io.kubernetes.cri-o.ImageRef":"3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210810225233-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b674a9b0919cd96b03ee6b9415bb734d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210810225233-345780_b674a9b0919cd96b03ee6b9415bb734d/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e391f330ab4e80cdf4e04fe94b6cf5e402b735ee77b3ddfb6a0639a4c8b9844c/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210810225233-345780_kube-system_b674a9b0919cd96b03ee6b9415bb734d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf
94f44d79e341784e598653c7ce7e75/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210810225233-345780_kube-system_b674a9b0919cd96b03ee6b9415bb734d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b674a9b0919cd96b03ee6b9415bb734d/containers/kube-apiserver/d568f8a1\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b674a9b0919cd96b03ee6b9415bb734d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/
ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b674a9b0919cd96b03ee6b9415bb734d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b674a9b0919cd96b03ee6b9415bb734d","kubernetes.io/config.seen":"2021-08-10T22:52:49.970647167Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0","pid":2843,"status":"run
ning","bundle":"/run/containers/storage/overlay-containers/27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0/userdata","rootfs":"/var/lib/containers/storage/overlay/d7591e1606bf21649f0fedd57d3912d3415f751c2a39e49367c56c46960a73b8/merged","created":"2021-08-10T22:54:15.733268009Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e46eedbb","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e46eedbb\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\
"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:54:15.601252913Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.po
d.name\":\"coredns-558bd4d5db-9tljg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"02063573-f956-476d-9bae-54c2abbf38ec\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-9tljg_02063573-f956-476d-9bae-54c2abbf38ec/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d7591e1606bf21649f0fedd57d3912d3415f751c2a39e49367c56c46960a73b8/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-9tljg_kube-system_02063573-f956-476d-9bae-54c2abbf38ec_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-9tljg_kube-system_02063573-f956-476d-9bae-54c2abbf38ec_0","io.kubernetes.cri-o.SeccompProfilePath"
:"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/02063573-f956-476d-9bae-54c2abbf38ec/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/02063573-f956-476d-9bae-54c2abbf38ec/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/02063573-f956-476d-9bae-54c2abbf38ec/containers/coredns/e365b713\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/02063573-f956-476d-9bae-54c2abbf38ec/volumes/kubernetes.io~projected/kube-api-access-qcx65\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-9tljg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.ui
d":"02063573-f956-476d-9bae-54c2abbf38ec","kubernetes.io/config.seen":"2021-08-10T22:53:20.661725080Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012","pid":2155,"status":"running","bundle":"/run/containers/storage/overlay-containers/3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012/userdata","rootfs":"/var/lib/containers/storage/overlay/c7204eaed3f2ee1ea1be78222cfd85e4a48f2a6afd897a344e8743249f0ddf50/merged","created":"2021-08-10T22:53:21.181234009Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2f5a01da","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotati
ons":"{\"io.kubernetes.container.hash\":\"2f5a01da\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:53:20.882613494Z","io.kubernetes.cri-o.Image":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-w546v\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7\"}","io.k
ubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-w546v_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c7204eaed3f2ee1ea1be78222cfd85e4a48f2a6afd897a344e8743249f0ddf50/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-w546v_kube-system_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-w546v_kube-system_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.
lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/containers/kindnet-cni/ba3af87a\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/volumes/kubernetes.io~projected/kube-api-access-5s4mm\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-w546v","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7","kubernetes.io/config.seen":"2021-08-10T22:53:19
.715966627Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc","pid":1334,"status":"running","bundle":"/run/containers/storage/overlay-containers/46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc/userdata","rootfs":"/var/lib/containers/storage/overlay/23b22baf4c4f9210cce0dbf790b17ed3874a7d85b083df28ce48f510fbfdb9b5/merged","created":"2021-08-10T22:52:52.742040682Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4d902deb","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4d902deb\",\"io.kubernetes.container.restartCount\":\"0
\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:52:52.465668139Z","io.kubernetes.cri-o.Image":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210810225233-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"160ac3a3a89c5d2b6c0448032f32313f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210810225233-345780_160ac3a3a89c5d2b6c0448032
f32313f/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/23b22baf4c4f9210cce0dbf790b17ed3874a7d85b083df28ce48f510fbfdb9b5/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210810225233-345780_kube-system_160ac3a3a89c5d2b6c0448032f32313f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210810225233-345780_kube-system_160ac3a3a89c5d2b6c0448032f32313f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/160ac3a3a89c5d2b6c0448032f32313f/etc-hosts\",\"r
eadonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/160ac3a3a89c5d2b6c0448032f32313f/containers/etcd/dba3b2bd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"160ac3a3a89c5d2b6c0448032f32313f","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"160ac3a3a89c5d2b6c0448032f32313f","kubernetes.io/config.seen":"2021-08-10T22:52:49.970645682Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":
"57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714","pid":1344,"status":"running","bundle":"/run/containers/storage/overlay-containers/57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714/userdata","rootfs":"/var/lib/containers/storage/overlay/18c3c37de94162b66dd4c899b0339d2c486b939eb75ba90c0b11cd43fd72b76b/merged","created":"2021-08-10T22:52:52.74118409Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9336f224","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9336f224\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}"
,"io.kubernetes.cri-o.ContainerID":"57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:52:52.460877845Z","io.kubernetes.cri-o.Image":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.3","io.kubernetes.cri-o.ImageRef":"bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210810225233-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"94030c587273fafbeefd24272ee4f17c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210810225233-345780_94030c587273fafbeefd24272ee4f17c/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/18c3c37de94162b66dd4c899b0339d2c486b939eb75ba90c0b11cd43fd72b76b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210810225233-345780_kube-system_94030c587273fafbeefd24272ee4f17c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210810225233-345780_kube-system_94030c587273fafbeefd24272ee4f17c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log
\",\"host_path\":\"/var/lib/kubelet/pods/94030c587273fafbeefd24272ee4f17c/containers/kube-controller-manager/3c1a15d6\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/94030c587273fafbeefd24272ee4f17c/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":fals
e}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"94030c587273fafbeefd24272ee4f17c","kubernetes.io/config.hash":"94030c587273fafbeefd24272ee4f17c","kubernetes.io/config.seen":"2021-08-10T22:52:49.970622629Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234","pid":2059,"status":"running","bundle":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata","rootfs":"/var/lib/containers/storage/overlay/6b0d36271a3c7127e0fd8a1a6159ddb2e537227c7bff0e91a32a760960cf6307/merged","created":"2021-08-10T22:53:20.75716168Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.cont
ainer.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:53:19.715966627Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-w546v_kube-system_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:53:20.63478181Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-w546v","io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"694b6fb659\",\"app\":\"kindnet\",\"io.
kubernetes.pod.uid\":\"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7\",\"tier\":\"node\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kindnet\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-w546v\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-w546v_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-w546v\",\"uid\":\"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6b0d36271a3c7127e0fd8a1a6159ddb2e537227c7bff0e91a32a760960cf6307/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-w546v_kube-system_8a257e43-80ad-47fa-a3e7-75ebf16ad3a7_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntim
e":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234/userdata/shm","io.kubernetes.pod.name":"kindnet-w546v","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8a257e43-80ad-47fa-a3e7-75ebf16ad3a7","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-10T22:53:19.715966627Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2","pid":1198,"statu
s":"running","bundle":"/run/containers/storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata","rootfs":"/var/lib/containers/storage/overlay/86563bdc30f9f8bd8041deec731973dcb246c65795ba7ef51b1b0306a9b51dd3/merged","created":"2021-08-10T22:52:51.509364476Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:52:49.970622629Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"94030c587273fafbeefd24272ee4f17c\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210810225233-345780_kube-system_94030c587273fafbeefd24272ee4f17c_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:52:51.35481879
Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210810225233-345780","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"94030c587273fafbeefd24272ee4f17c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210810225233-345780\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210810225233-345780_94030c587273fafbeefd24272ee4f17c/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pau
se-20210810225233-345780\",\"uid\":\"94030c587273fafbeefd24272ee4f17c\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/86563bdc30f9f8bd8041deec731973dcb246c65795ba7ef51b1b0306a9b51dd3/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210810225233-345780_kube-system_94030c587273fafbeefd24272ee4f17c_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/94f10dd30
6eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"94030c587273fafbeefd24272ee4f17c","kubernetes.io/config.hash":"94030c587273fafbeefd24272ee4f17c","kubernetes.io/config.seen":"2021-08-10T22:52:49.970622629Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181","pid":2811,"status":"running","bundle":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata","rootfs":"/var/lib/containers/storage/overlay/055cc6e7a8b9cc9e86f953d7a00fb748acf093f5fa40c1cb684434f1ea2517c6/merged","created":"2021-08-10T22:54:15.545366086Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernet
es.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:53:20.661725080Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"vethb1cc53e2\",\"mac\":\"f2:88:5f:1d:0a:cf\"},{\"name\":\"eth0\",\"mac\":\"1e:c6:5b:eb:c6:d4\",\"sandbox\":\"/var/run/netns/de69ec9c-b650-4219-9b68-b3b9062cf15a\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-9tljg_kube-system_02063573-f956-476d-9bae-54c2abbf38ec_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:54:15.385490254Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-9tljg","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri
-o.HostnamePath":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-9tljg","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"02063573-f956-476d-9bae-54c2abbf38ec\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-9tljg\",\"pod-template-hash\":\"558bd4d5db\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-9tljg_02063573-f956-476d-9bae-54c2abbf38ec/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-9tljg\",\"uid\":\"02063573-f956-476d-9bae-54c2abbf38ec\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/055cc6e7a8b9cc9e86f953d7a00fb748acf093f5fa40c1cb6844
34f1ea2517c6/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-9tljg_kube-system_02063573-f956-476d-9bae-54c2abbf38ec_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-9tljg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"02063573-f956-476d-9bae-54c2abbf38ec","k8s-app":"kube-dns","kubernetes.
io/config.seen":"2021-08-10T22:53:20.661725080Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316","pid":1321,"status":"running","bundle":"/run/containers/storage/overlay-containers/a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316/userdata","rootfs":"/var/lib/containers/storage/overlay/0dac4eec877360c2cbb458d7fca6800c7f1c9af4c3a68d2b1e8d31929873af77/merged","created":"2021-08-10T22:52:52.741248056Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bde20ce","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bde20ce\",\"io.kubernetes.containe
r.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:52:52.445248353Z","io.kubernetes.cri-o.Image":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.3","io.kubernetes.cri-o.ImageRef":"6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210810225233-345780\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"065b8ee5be44c8bb759a20e5e68abf58\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-sche
duler-pause-20210810225233-345780_065b8ee5be44c8bb759a20e5e68abf58/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0dac4eec877360c2cbb458d7fca6800c7f1c9af4c3a68d2b1e8d31929873af77/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210810225233-345780_kube-system_065b8ee5be44c8bb759a20e5e68abf58_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210810225233-345780_kube-system_065b8ee5be44c8bb759a20e5e68abf58_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container
_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/065b8ee5be44c8bb759a20e5e68abf58/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/065b8ee5be44c8bb759a20e5e68abf58/containers/kube-scheduler/fc927cdf\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"065b8ee5be44c8bb759a20e5e68abf58","kubernetes.io/config.hash":"065b8ee5be44c8bb759a20e5e68abf58","kubernetes.io/config.seen":"2021-08-10T22:52:49.970643623Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a6593388812907c340a5fa020cc0b4facf94f44d79e341784e5986
53c7ce7e75","pid":1186,"status":"running","bundle":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75/userdata","rootfs":"/var/lib/containers/storage/overlay/4051995db417a9d6778d389a39ea41c261ccd9b7dbb2dd7a9b84b750e196309d/merged","created":"2021-08-10T22:52:51.509346737Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:52:49.970647167Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b674a9b0919cd96b03ee6b9415bb734d\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.49.2:8443\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210810225233-345780_kube-system_b674a9b0919cd96b03ee6b9415bb734d_0","io.
kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:52:51.351867951Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210810225233-345780","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210810225233-345780\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"b674a9b0919cd96b03ee6b9415bb734d\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210810225233-345780_b674a9b0919cd96b03ee6b9415bb734d/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75.log","io.k
ubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210810225233-345780\",\"uid\":\"b674a9b0919cd96b03ee6b9415bb734d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4051995db417a9d6778d389a39ea41c261ccd9b7dbb2dd7a9b84b750e196309d/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210810225233-345780_kube-system_b674a9b0919cd96b03ee6b9415bb734d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run
/containers/storage/overlay-containers/a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b674a9b0919cd96b03ee6b9415bb734d","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b674a9b0919cd96b03ee6b9415bb734d","kubernetes.io/config.seen":"2021-08-10T22:52:49.970647167Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f","pid":2066,"status":"running","bundle":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata","rootfs":"/var/lib/containers/storage/overlay/27e60d11d9de46b808d545cdacf425a10890a0aba8e7937f529d9a414ddd360f/merged","created":"20
21-08-10T22:53:20.757446716Z","annotations":{"controller-revision-hash":"7cdcb64568","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:53:19.710142604Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-pwcm8_kube-system_6dee8a63-1575-445b-9978-c72ad86a1d79_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:53:20.632348013Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-pwcm8",
"io.kubernetes.cri-o.Labels":"{\"controller-revision-hash\":\"7cdcb64568\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"6dee8a63-1575-445b-9978-c72ad86a1d79\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-pwcm8\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pwcm8_6dee8a63-1575-445b-9978-c72ad86a1d79/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-pwcm8\",\"uid\":\"6dee8a63-1575-445b-9978-c72ad86a1d79\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/27e60d11d9de46b808d545cdacf425a10890a0aba8e7937f529d9a414ddd360f/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-pwcm8_kube-system_6dee8a63-1575-445b-9978-c72ad86a1d79_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}",
"io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f/userdata/shm","io.kubernetes.pod.name":"kube-proxy-pwcm8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"6dee8a63-1575-445b-9978-c72ad86a1d79","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-10T22:53:19.710142604Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e30f9d0a5b4993
72f984af2396fff7419f8f8e39af235ff27e067d04f72badba","pid":1176,"status":"running","bundle":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata","rootfs":"/var/lib/containers/storage/overlay/162f110630c0c13f4030a81bcf015a4bc06e9a0e6cba243e21a8edf0cf6f43be/merged","created":"2021-08-10T22:52:51.509378429Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"065b8ee5be44c8bb759a20e5e68abf58\",\"kubernetes.io/config.seen\":\"2021-08-10T22:52:49.970643623Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210810225233-345780_kube-system_065b8ee5be44c8bb759a20e5e68abf58_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kube
rnetes.cri-o.Created":"2021-08-10T22:52:51.344180698Z","io.kubernetes.cri-o.HostName":"pause-20210810225233-345780","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210810225233-345780","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"065b8ee5be44c8bb759a20e5e68abf58\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210810225233-345780\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210810225233-345780_065b8ee5be44c8bb759a20e5e68abf58/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210810225233-345780\",\"uid\":\"065b8ee5be44c8bb759a20e5e68abf58\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/162f110630c0c13f4030a81bcf015a4bc06e9a0e6cba243e21a8edf0cf6f43be/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210810225233-345780_kube-system_065b8ee5be44c8bb759a20e5e68abf58_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210810225233-345780","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"065b8ee5be44c8bb759a20e5e68abf58","kubernetes.io/config.hash":"065b8ee5be44c8bb759a20e5e68abf58","kubernetes.io/config.seen":"2021-08-10T22:52:49.970643623Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0810 22:54:22.310612  536245 cri.go:113] list returned 14 containers
	I0810 22:54:22.310630  536245 cri.go:116] container: {ID:15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356 Status:running}
	I0810 22:54:22.310649  536245 cri.go:122] skipping {15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356 running}: state = "running", want "paused"
	I0810 22:54:22.310665  536245 cri.go:116] container: {ID:1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe Status:running}
	I0810 22:54:22.310670  536245 cri.go:118] skipping 1be5f8b230497e37eb89d82112a48af14492861493b4d76815fc37455586dcfe - not in ps
	I0810 22:54:22.310674  536245 cri.go:116] container: {ID:22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32 Status:running}
	I0810 22:54:22.310678  536245 cri.go:122] skipping {22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32 running}: state = "running", want "paused"
	I0810 22:54:22.310683  536245 cri.go:116] container: {ID:27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0 Status:running}
	I0810 22:54:22.310687  536245 cri.go:122] skipping {27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0 running}: state = "running", want "paused"
	I0810 22:54:22.310691  536245 cri.go:116] container: {ID:3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012 Status:running}
	I0810 22:54:22.310695  536245 cri.go:122] skipping {3351b40c4bee363375d0b5a179e8ae5918b157612199094478dd43e9002c7012 running}: state = "running", want "paused"
	I0810 22:54:22.310702  536245 cri.go:116] container: {ID:46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc Status:running}
	I0810 22:54:22.310711  536245 cri.go:122] skipping {46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc running}: state = "running", want "paused"
	I0810 22:54:22.310717  536245 cri.go:116] container: {ID:57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714 Status:running}
	I0810 22:54:22.310725  536245 cri.go:122] skipping {57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714 running}: state = "running", want "paused"
	I0810 22:54:22.310729  536245 cri.go:116] container: {ID:898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234 Status:running}
	I0810 22:54:22.310733  536245 cri.go:118] skipping 898bd0d9b428e8c8baa202278adc46d59d9a6de4a2a439c4cf13eccc2ed3c234 - not in ps
	I0810 22:54:22.310736  536245 cri.go:116] container: {ID:94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2 Status:running}
	I0810 22:54:22.310740  536245 cri.go:118] skipping 94f10dd306eb57df031f04dabe6e73cfbf4faccb3923c2274b984cefe27c8bd2 - not in ps
	I0810 22:54:22.310744  536245 cri.go:116] container: {ID:97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181 Status:running}
	I0810 22:54:22.310748  536245 cri.go:118] skipping 97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181 - not in ps
	I0810 22:54:22.310751  536245 cri.go:116] container: {ID:a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316 Status:running}
	I0810 22:54:22.310756  536245 cri.go:122] skipping {a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316 running}: state = "running", want "paused"
	I0810 22:54:22.310762  536245 cri.go:116] container: {ID:a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75 Status:running}
	I0810 22:54:22.310767  536245 cri.go:118] skipping a6593388812907c340a5fa020cc0b4facf94f44d79e341784e598653c7ce7e75 - not in ps
	I0810 22:54:22.310770  536245 cri.go:116] container: {ID:b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f Status:running}
	I0810 22:54:22.310774  536245 cri.go:118] skipping b7bae7384d157a2999c47c3849cc9192a7f68a83f1548f4227256eeec41d211f - not in ps
	I0810 22:54:22.310778  536245 cri.go:116] container: {ID:e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba Status:running}
	I0810 22:54:22.310782  536245 cri.go:118] skipping e30f9d0a5b499372f984af2396fff7419f8f8e39af235ff27e067d04f72badba - not in ps
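The cri.go lines above show minikube filtering the CRI container list before a pause/unpause: each container is skipped either because its ID is not in the `ps` set, or because its state does not match the wanted state (`state = "running", want "paused"`). A minimal, illustrative sketch of that filter follows; the `container`, `filterByState`, and `inPs` names are hypothetical stand-ins, not minikube's actual types.

```go
package main

import "fmt"

// container mirrors the {ID Status} pairs printed by cri.go above
// (illustrative only; the real code lives in minikube's cri package).
type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers that appear in the ps set and whose
// status matches want, logging a skip reason for the rest — the same two
// skip paths seen in the log ("not in ps" and state mismatch).
func filterByState(all []container, inPs map[string]bool, want string) []container {
	var kept []container
	for _, c := range all {
		if !inPs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	all := []container{{"aaa", "running"}, {"bbb", "paused"}, {"ccc", "paused"}}
	inPs := map[string]bool{"aaa": true, "bbb": true}
	// Only "bbb" is both in the ps set and already paused.
	kept := filterByState(all, inPs, "paused")
	fmt.Println(len(kept))
}
```

In the log above every container is running while the caller wants "paused", so all fourteen are skipped and nothing is selected.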
	I0810 22:54:22.310833  536245 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:54:22.318647  536245 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0810 22:54:22.318677  536245 kubeadm.go:600] restartCluster start
	I0810 22:54:22.318732  536245 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0810 22:54:22.326024  536245 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0810 22:54:22.326866  536245 kubeconfig.go:93] found "pause-20210810225233-345780" server: "https://192.168.49.2:8443"
	I0810 22:54:22.327483  536245 kapi.go:59] client config for pause-20210810225233-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:54:22.329186  536245 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0810 22:54:22.352663  536245 api_server.go:164] Checking apiserver status ...
	I0810 22:54:22.352731  536245 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:54:22.374341  536245 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1335/cgroup
	I0810 22:54:22.382567  536245 api_server.go:180] apiserver freezer: "9:freezer:/docker/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/system.slice/crio-22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32.scope"
	I0810 22:54:22.382634  536245 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/system.slice/crio-22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32.scope/freezer.state
	I0810 22:54:22.389384  536245 api_server.go:202] freezer state: "THAWED"
	I0810 22:54:22.389417  536245 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:54:22.394127  536245 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
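The api_server.go lines above walk a three-step liveness check: `pgrep` for the apiserver PID, read the process's freezer cgroup from `/proc/<pid>/cgroup` and confirm it is "THAWED", then GET `https://…:8443/healthz` and expect 200. The freezer lookup parses cgroup v1 lines of the form `N:freezer:/path`. A small sketch of that parse, under the assumption the input looks like the line logged; `freezerCgroup` is a hypothetical helper, not minikube's actual function.

```go
package main

import (
	"fmt"
	"strings"
)

// freezerCgroup extracts the freezer hierarchy path from the contents of a
// /proc/<pid>/cgroup file, whose cgroup v1 lines have the shape
// "hierarchy-ID:controller:/path" (e.g. "9:freezer:/docker/.../crio-....scope").
func freezerCgroup(procCgroup string) (string, bool) {
	for _, line := range strings.Split(procCgroup, "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return parts[2], true
		}
	}
	return "", false
}

func main() {
	sample := "10:memory:/docker/abc\n9:freezer:/docker/abc/system.slice/crio-22e7.scope"
	path, ok := freezerCgroup(sample)
	fmt.Println(ok, path)
}
```

The real check then reads `freezer.state` under that path in `/sys/fs/cgroup/freezer/`; "THAWED" means the container is not frozen, so the HTTP probe is worth attempting.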
	I0810 22:54:22.412193  536245 system_pods.go:86] 7 kube-system pods found
	I0810 22:54:22.412225  536245 system_pods.go:89] "coredns-558bd4d5db-9tljg" [02063573-f956-476d-9bae-54c2abbf38ec] Running
	I0810 22:54:22.412231  536245 system_pods.go:89] "etcd-pause-20210810225233-345780" [e420c3c6-218d-4300-97aa-5b6d28964c69] Running
	I0810 22:54:22.412236  536245 system_pods.go:89] "kindnet-w546v" [8a257e43-80ad-47fa-a3e7-75ebf16ad3a7] Running
	I0810 22:54:22.412240  536245 system_pods.go:89] "kube-apiserver-pause-20210810225233-345780" [e22ddcf2-0d5a-458d-8670-a4510a1011ab] Running
	I0810 22:54:22.412244  536245 system_pods.go:89] "kube-controller-manager-pause-20210810225233-345780" [d3f97e84-ab4e-4d08-8204-df2d94ba6be3] Running
	I0810 22:54:22.412248  536245 system_pods.go:89] "kube-proxy-pwcm8" [6dee8a63-1575-445b-9978-c72ad86a1d79] Running
	I0810 22:54:22.412260  536245 system_pods.go:89] "kube-scheduler-pause-20210810225233-345780" [dcb8b14f-49ac-4f22-9b8f-401fae37f1dc] Running
	I0810 22:54:22.413353  536245 api_server.go:139] control plane version: v1.21.3
	I0810 22:54:22.413379  536245 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2
	I0810 22:54:22.413391  536245 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0810 22:54:22.413397  536245 kubeadm.go:604] restartCluster took 94.714468ms
	I0810 22:54:22.413408  536245 kubeadm.go:392] StartCluster complete in 168.681159ms
	I0810 22:54:22.413428  536245 settings.go:142] acquiring lock: {Name:mka213f92e424859b3fea9ed3e06c1529c3d79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:22.413574  536245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:54:22.414929  536245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mk4b0a8134f819d1f0c4fc03757f6964ae0e24de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:22.416042  536245 kapi.go:59] client config for pause-20210810225233-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:54:22.420079  536245 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210810225233-345780" rescaled to 1
	I0810 22:54:22.420134  536245 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:54:22.424824  536245 out.go:177] * Verifying Kubernetes components...
	I0810 22:54:22.420167  536245 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:54:22.420176  536245 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0810 22:54:22.424887  536245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:54:22.425012  536245 addons.go:59] Setting storage-provisioner=true in profile "pause-20210810225233-345780"
	I0810 22:54:22.425034  536245 addons.go:135] Setting addon storage-provisioner=true in "pause-20210810225233-345780"
	I0810 22:54:22.425042  536245 addons.go:59] Setting default-storageclass=true in profile "pause-20210810225233-345780"
	W0810 22:54:22.425047  536245 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:54:22.425075  536245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210810225233-345780"
	I0810 22:54:22.425092  536245 host.go:66] Checking if "pause-20210810225233-345780" exists ...
	I0810 22:54:22.425414  536245 cli_runner.go:115] Run: docker container inspect pause-20210810225233-345780 --format={{.State.Status}}
	I0810 22:54:22.425652  536245 cli_runner.go:115] Run: docker container inspect pause-20210810225233-345780 --format={{.State.Status}}
	I0810 22:54:22.481400  536245 kapi.go:59] client config for pause-20210810225233-345780: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/pause-20210810225233-345780/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:54:22.486524  536245 addons.go:135] Setting addon default-storageclass=true in "pause-20210810225233-345780"
	W0810 22:54:22.486550  536245 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:54:22.486582  536245 host.go:66] Checking if "pause-20210810225233-345780" exists ...
	I0810 22:54:22.487067  536245 cli_runner.go:115] Run: docker container inspect pause-20210810225233-345780 --format={{.State.Status}}
	I0810 22:54:18.398105  536187 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0810 22:54:18.398398  536187 start.go:160] libmachine.API.Create for "old-k8s-version-20210810225417-345780" (driver="docker")
	I0810 22:54:18.398431  536187 client.go:168] LocalClient.Create starting
	I0810 22:54:18.398552  536187 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:54:18.398589  536187 main.go:130] libmachine: Decoding PEM data...
	I0810 22:54:18.398622  536187 main.go:130] libmachine: Parsing certificate...
	I0810 22:54:18.398785  536187 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:54:18.398811  536187 main.go:130] libmachine: Decoding PEM data...
	I0810 22:54:18.398829  536187 main.go:130] libmachine: Parsing certificate...
	I0810 22:54:18.399241  536187 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210810225417-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0810 22:54:18.447140  536187 cli_runner.go:162] docker network inspect old-k8s-version-20210810225417-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0810 22:54:18.447244  536187 network_create.go:255] running [docker network inspect old-k8s-version-20210810225417-345780] to gather additional debugging logs...
	I0810 22:54:18.447271  536187 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210810225417-345780
	W0810 22:54:18.504244  536187 cli_runner.go:162] docker network inspect old-k8s-version-20210810225417-345780 returned with exit code 1
	I0810 22:54:18.504283  536187 network_create.go:258] error running [docker network inspect old-k8s-version-20210810225417-345780]: docker network inspect old-k8s-version-20210810225417-345780: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20210810225417-345780
	I0810 22:54:18.504301  536187 network_create.go:260] output of [docker network inspect old-k8s-version-20210810225417-345780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20210810225417-345780
	
	** /stderr **
	I0810 22:54:18.504367  536187 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:54:18.553469  536187 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-51e1127032be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fe:70:b4:31}}
	I0810 22:54:18.554090  536187 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-08818fa49fb2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:72:b1:84:3c}}
	I0810 22:54:18.554837  536187 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000114098] misses:0}
	I0810 22:54:18.554878  536187 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
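The network.go lines above show minikube probing candidate private /24 subnets, skipping 192.168.49.0/24 and 192.168.58.0/24 because existing bridges already claim them, and settling on 192.168.67.0/24. The candidates in the log step the third octet by 9 (49 → 58 → 67); a simplified sketch of that walk follows. `nextFreeSubnet` and the fixed step are illustrative assumptions, not minikube's exact allocation logic.

```go
package main

import "fmt"

// nextFreeSubnet walks candidate 192.168.X.0/24 subnets, stepping the third
// octet by 9 as the log sequence suggests (49, 58, 67, ...), and returns the
// first one not already taken by an existing network interface.
func nextFreeSubnet(taken map[int]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		if !taken[octet] {
			return fmt.Sprintf("192.168.%d.0/24", octet), true
		}
	}
	return "", false
}

func main() {
	// In the run above, the pause and NoKubernetes clusters already hold
	// 192.168.49.0/24 and 192.168.58.0/24.
	taken := map[int]bool{49: true, 58: true}
	subnet, _ := nextFreeSubnet(taken)
	fmt.Println(subnet)
}
```

The chosen subnet is then reserved for one minute (the `reserving subnet 192.168.67.0 for 1m0s` line) so concurrent test clusters do not race for the same range before `docker network create` runs.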
	I0810 22:54:18.554890  536187 network_create.go:106] attempt to create docker network old-k8s-version-20210810225417-345780 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0810 22:54:18.554947  536187 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210810225417-345780
	I0810 22:54:18.646511  536187 network_create.go:90] docker network old-k8s-version-20210810225417-345780 192.168.67.0/24 created
	I0810 22:54:18.646556  536187 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20210810225417-345780" container
	I0810 22:54:18.646636  536187 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0810 22:54:18.703945  536187 cli_runner.go:115] Run: docker volume create old-k8s-version-20210810225417-345780 --label name.minikube.sigs.k8s.io=old-k8s-version-20210810225417-345780 --label created_by.minikube.sigs.k8s.io=true
	I0810 22:54:18.746272  536187 oci.go:102] Successfully created a docker volume old-k8s-version-20210810225417-345780
	I0810 22:54:18.746415  536187 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20210810225417-345780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210810225417-345780 --entrypoint /usr/bin/test -v old-k8s-version-20210810225417-345780:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0810 22:54:19.597328  536187 oci.go:106] Successfully prepared a docker volume old-k8s-version-20210810225417-345780
	W0810 22:54:19.597392  536187 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0810 22:54:19.597406  536187 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0810 22:54:19.597460  536187 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0810 22:54:19.597470  536187 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0810 22:54:19.597493  536187 kic.go:179] Starting extracting preloaded images to volume ...
	I0810 22:54:19.597567  536187 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210810225417-345780:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0810 22:54:19.691880  536187 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210810225417-345780 --name old-k8s-version-20210810225417-345780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210810225417-345780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210810225417-345780 --network old-k8s-version-20210810225417-345780 --ip 192.168.67.2 --volume old-k8s-version-20210810225417-345780:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0810 22:54:20.373628  536187 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210810225417-345780 --format={{.State.Running}}
	I0810 22:54:20.433869  536187 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210810225417-345780 --format={{.State.Status}}
	I0810 22:54:20.483539  536187 cli_runner.go:115] Run: docker exec old-k8s-version-20210810225417-345780 stat /var/lib/dpkg/alternatives/iptables
	I0810 22:54:20.636103  536187 oci.go:278] the created container "old-k8s-version-20210810225417-345780" has a running status.
	I0810 22:54:20.636150  536187 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210810225417-345780/id_rsa...
	I0810 22:54:20.938051  536187 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210810225417-345780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0810 22:54:21.320436  536187 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210810225417-345780 --format={{.State.Status}}
	I0810 22:54:21.375128  536187 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0810 22:54:21.375160  536187 kic_runner.go:115] Args: [docker exec --privileged old-k8s-version-20210810225417-345780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0810 22:54:22.493945  536245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:54:22.494109  536245 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:54:22.494132  536245 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:54:22.494210  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:22.528193  536245 node_ready.go:35] waiting up to 6m0s for node "pause-20210810225233-345780" to be "Ready" ...
	I0810 22:54:22.528248  536245 start.go:716] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0810 22:54:22.532374  536245 node_ready.go:49] node "pause-20210810225233-345780" has status "Ready":"True"
	I0810 22:54:22.532397  536245 node_ready.go:38] duration metric: took 4.14988ms waiting for node "pause-20210810225233-345780" to be "Ready" ...
	I0810 22:54:22.532409  536245 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:54:22.538523  536245 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-9tljg" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.546745  536245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/pause-20210810225233-345780/id_rsa Username:docker}
	I0810 22:54:22.549161  536245 pod_ready.go:92] pod "coredns-558bd4d5db-9tljg" in "kube-system" namespace has status "Ready":"True"
	I0810 22:54:22.549185  536245 pod_ready.go:81] duration metric: took 10.634688ms waiting for pod "coredns-558bd4d5db-9tljg" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.549198  536245 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210810225233-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.551322  536245 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:54:22.551345  536245 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:54:22.551404  536245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210810225233-345780
	I0810 22:54:22.555422  536245 pod_ready.go:92] pod "etcd-pause-20210810225233-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:54:22.555447  536245 pod_ready.go:81] duration metric: took 6.240256ms waiting for pod "etcd-pause-20210810225233-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.555464  536245 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210810225233-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.560144  536245 pod_ready.go:92] pod "kube-apiserver-pause-20210810225233-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:54:22.560174  536245 pod_ready.go:81] duration metric: took 4.699573ms waiting for pod "kube-apiserver-pause-20210810225233-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.560192  536245 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210810225233-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.598073  536245 pod_ready.go:92] pod "kube-controller-manager-pause-20210810225233-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:54:22.598095  536245 pod_ready.go:81] duration metric: took 37.893688ms waiting for pod "kube-controller-manager-pause-20210810225233-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.598106  536245 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pwcm8" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.603944  536245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/pause-20210810225233-345780/id_rsa Username:docker}
	I0810 22:54:22.643116  536245 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:54:22.698903  536245 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:54:22.997356  536245 pod_ready.go:92] pod "kube-proxy-pwcm8" in "kube-system" namespace has status "Ready":"True"
	I0810 22:54:22.997382  536245 pod_ready.go:81] duration metric: took 399.268557ms waiting for pod "kube-proxy-pwcm8" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:22.997396  536245 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210810225233-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:23.011740  536245 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0810 22:54:23.011775  536245 addons.go:344] enableAddons completed in 591.61458ms
	I0810 22:54:23.398617  536245 pod_ready.go:92] pod "kube-scheduler-pause-20210810225233-345780" in "kube-system" namespace has status "Ready":"True"
	I0810 22:54:23.398649  536245 pod_ready.go:81] duration metric: took 401.244225ms waiting for pod "kube-scheduler-pause-20210810225233-345780" in "kube-system" namespace to be "Ready" ...
	I0810 22:54:23.398662  536245 pod_ready.go:38] duration metric: took 866.237571ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
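The pod_ready.go lines above poll each system-critical pod until its status reports `"Ready":"True"`, recording a duration metric per pod. That readiness test boils down to scanning the pod's status conditions for the `Ready` condition. A minimal sketch with illustrative stand-in types (the real code uses `k8s.io/api/core/v1` structs):

```go
package main

import "fmt"

// condition and podStatus are minimal stand-ins for the corev1 Pod status
// fields consulted here; only the fields the check needs are modeled.
type condition struct {
	Type   string
	Status string
}

type podStatus struct {
	Conditions []condition
}

// isPodReady reports whether the pod's Ready condition is True — the check
// behind the `has status "Ready":"True"` lines above. A pod with no Ready
// condition (e.g. still Pending) is treated as not ready.
func isPodReady(s podStatus) bool {
	for _, c := range s.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	s := podStatus{Conditions: []condition{
		{"Initialized", "True"},
		{"Ready", "True"},
		{"ContainersReady", "True"},
	}}
	fmt.Println(isPodReady(s))
}
```

Each wait loop above exits as soon as this condition flips to True, which is why the per-pod durations range from a few milliseconds (pods already Ready) to ~400ms (pods polled once more).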
	I0810 22:54:23.398687  536245 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:54:23.398740  536245 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:54:23.425626  536245 api_server.go:70] duration metric: took 1.005453643s to wait for apiserver process to appear ...
	I0810 22:54:23.425658  536245 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:54:23.425680  536245 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:54:23.430531  536245 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0810 22:54:23.431455  536245 api_server.go:139] control plane version: v1.21.3
	I0810 22:54:23.431479  536245 api_server.go:129] duration metric: took 5.813795ms to wait for apiserver health ...
	I0810 22:54:23.431490  536245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:54:23.599904  536245 system_pods.go:59] 8 kube-system pods found
	I0810 22:54:23.599935  536245 system_pods.go:61] "coredns-558bd4d5db-9tljg" [02063573-f956-476d-9bae-54c2abbf38ec] Running
	I0810 22:54:23.599939  536245 system_pods.go:61] "etcd-pause-20210810225233-345780" [e420c3c6-218d-4300-97aa-5b6d28964c69] Running
	I0810 22:54:23.599943  536245 system_pods.go:61] "kindnet-w546v" [8a257e43-80ad-47fa-a3e7-75ebf16ad3a7] Running
	I0810 22:54:23.599947  536245 system_pods.go:61] "kube-apiserver-pause-20210810225233-345780" [e22ddcf2-0d5a-458d-8670-a4510a1011ab] Running
	I0810 22:54:23.599952  536245 system_pods.go:61] "kube-controller-manager-pause-20210810225233-345780" [d3f97e84-ab4e-4d08-8204-df2d94ba6be3] Running
	I0810 22:54:23.599958  536245 system_pods.go:61] "kube-proxy-pwcm8" [6dee8a63-1575-445b-9978-c72ad86a1d79] Running
	I0810 22:54:23.599962  536245 system_pods.go:61] "kube-scheduler-pause-20210810225233-345780" [dcb8b14f-49ac-4f22-9b8f-401fae37f1dc] Running
	I0810 22:54:23.599968  536245 system_pods.go:61] "storage-provisioner" [2c07bcdc-3e04-4897-98d3-e3c2e9120858] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0810 22:54:23.599974  536245 system_pods.go:74] duration metric: took 168.47906ms to wait for pod list to return data ...
	I0810 22:54:23.599989  536245 default_sa.go:34] waiting for default service account to be created ...
	I0810 22:54:23.798762  536245 default_sa.go:45] found service account: "default"
	I0810 22:54:23.798788  536245 default_sa.go:55] duration metric: took 198.792368ms for default service account to be created ...
	I0810 22:54:23.798796  536245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0810 22:54:24.000857  536245 system_pods.go:86] 8 kube-system pods found
	I0810 22:54:24.000885  536245 system_pods.go:89] "coredns-558bd4d5db-9tljg" [02063573-f956-476d-9bae-54c2abbf38ec] Running
	I0810 22:54:24.000891  536245 system_pods.go:89] "etcd-pause-20210810225233-345780" [e420c3c6-218d-4300-97aa-5b6d28964c69] Running
	I0810 22:54:24.000895  536245 system_pods.go:89] "kindnet-w546v" [8a257e43-80ad-47fa-a3e7-75ebf16ad3a7] Running
	I0810 22:54:24.000899  536245 system_pods.go:89] "kube-apiserver-pause-20210810225233-345780" [e22ddcf2-0d5a-458d-8670-a4510a1011ab] Running
	I0810 22:54:24.000903  536245 system_pods.go:89] "kube-controller-manager-pause-20210810225233-345780" [d3f97e84-ab4e-4d08-8204-df2d94ba6be3] Running
	I0810 22:54:24.000907  536245 system_pods.go:89] "kube-proxy-pwcm8" [6dee8a63-1575-445b-9978-c72ad86a1d79] Running
	I0810 22:54:24.000911  536245 system_pods.go:89] "kube-scheduler-pause-20210810225233-345780" [dcb8b14f-49ac-4f22-9b8f-401fae37f1dc] Running
	I0810 22:54:24.000953  536245 system_pods.go:89] "storage-provisioner" [2c07bcdc-3e04-4897-98d3-e3c2e9120858] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0810 22:54:24.000969  536245 system_pods.go:126] duration metric: took 202.166ms to wait for k8s-apps to be running ...
	I0810 22:54:24.000978  536245 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:54:24.001033  536245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:54:24.011262  536245 system_svc.go:56] duration metric: took 10.271346ms WaitForService to wait for kubelet.
	I0810 22:54:24.011296  536245 kubeadm.go:547] duration metric: took 1.591133684s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:54:24.011328  536245 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:54:24.198736  536245 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0810 22:54:24.198776  536245 node_conditions.go:123] node cpu capacity is 8
	I0810 22:54:24.198794  536245 node_conditions.go:105] duration metric: took 187.459852ms to run NodePressure ...
	I0810 22:54:24.198807  536245 start.go:231] waiting for startup goroutines ...
	I0810 22:54:24.253822  536245 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0810 22:54:24.255761  536245 out.go:177] * Done! kubectl is now configured to use "pause-20210810225233-345780" cluster and "default" namespace by default
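The startup trace above is paced by "duration metric: took …" lines. Those timings can be mined from a saved copy of such a log with standard tools; a minimal sketch (the heredoc lines are copied from the trace above, the pipeline itself is an illustrative assumption, not part of minikube):

```shell
# Extract "duration metric" timings from a saved log excerpt and print the
# slowest one, normalizing ms to seconds. Log lines copied from the trace above.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
I0810 22:54:23.425626  536245 api_server.go:70] duration metric: took 1.005453643s to wait for apiserver process to appear ...
I0810 22:54:24.011296  536245 kubeadm.go:547] duration metric: took 1.591133684s to wait for ...
I0810 22:54:24.198794  536245 node_conditions.go:105] duration metric: took 187.459852ms to run NodePressure ...
EOF
# "took <N>s" / "took <N>ms" -> seconds, sorted descending, keep the largest.
grep -o 'took [0-9.]*m\?s' "$LOG" \
  | sed -e 's/took //' -e 's/ms$/ milli/' -e 's/s$//' \
  | awk '{ if ($2 == "milli") $1 /= 1000; print $1 }' \
  | sort -rn | head -1    # -> 1.591133684
```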
	I0810 22:54:25.961144  532757 out.go:204]   - Configuring RBAC rules ...
	I0810 22:54:26.375584  532757 cni.go:93] Creating CNI manager for ""
	I0810 22:54:26.375599  532757 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:54:23.994335  536187 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210810225417-345780:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.396714784s)
	I0810 22:54:23.994376  536187 kic.go:188] duration metric: took 4.396878 seconds to extract preloaded images to volume
	I0810 22:54:23.994502  536187 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210810225417-345780 --format={{.State.Status}}
	I0810 22:54:24.037381  536187 machine.go:88] provisioning docker machine ...
	I0810 22:54:24.037425  536187 ubuntu.go:169] provisioning hostname "old-k8s-version-20210810225417-345780"
	I0810 22:54:24.037486  536187 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210810225417-345780
	I0810 22:54:24.078895  536187 main.go:130] libmachine: Using SSH client type: native
	I0810 22:54:24.079098  536187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I0810 22:54:24.079124  536187 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210810225417-345780 && echo "old-k8s-version-20210810225417-345780" | sudo tee /etc/hostname
	I0810 22:54:24.248303  536187 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210810225417-345780
	
	I0810 22:54:24.248383  536187 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210810225417-345780
	I0810 22:54:24.302620  536187 main.go:130] libmachine: Using SSH client type: native
	I0810 22:54:24.302844  536187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I0810 22:54:24.302877  536187 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210810225417-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210810225417-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210810225417-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:54:24.425430  536187 main.go:130] libmachine: SSH cmd err, output: <nil>: 
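The /etc/hosts fix-up just run over SSH (the grep/sed block above) can be replayed locally; a minimal sketch, assuming a throwaway temp file in place of the real /etc/hosts and substituting POSIX `[[:space:]]` for the GNU `\s` used in the log:

```shell
# Replay of minikube's /etc/hosts hostname rewrite against a temp copy,
# so it runs without root or a minikube machine. Hostname taken from the
# log above; the sample file contents are assumptions.
set -eu
NEW="old-k8s-version-20210810225417-345780"
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 previous-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]${NEW}\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An old 127.0.1.1 entry exists: rewrite it in place, as the log does.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW}/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 ${NEW}" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"   # -> 127.0.1.1 old-k8s-version-20210810225417-345780
```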
	I0810 22:54:24.425465  536187 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:54:24.425494  536187 ubuntu.go:177] setting up certificates
	I0810 22:54:24.425508  536187 provision.go:83] configureAuth start
	I0810 22:54:24.425573  536187 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210810225417-345780
	I0810 22:54:24.471689  536187 provision.go:137] copyHostCerts
	I0810 22:54:24.471762  536187 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:54:24.471776  536187 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:54:24.471882  536187 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:54:24.471983  536187 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:54:24.471997  536187 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:54:24.472018  536187 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:54:24.472070  536187 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:54:24.472079  536187 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:54:24.472096  536187 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:54:24.472152  536187 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210810225417-345780 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210810225417-345780]
	I0810 22:54:24.714953  536187 provision.go:171] copyRemoteCerts
	I0810 22:54:24.715010  536187 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:54:24.715047  536187 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210810225417-345780
	I0810 22:54:24.766429  536187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210810225417-345780/id_rsa Username:docker}
	I0810 22:54:24.866102  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:54:24.887102  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0810 22:54:24.903786  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0810 22:54:24.921299  536187 provision.go:86] duration metric: configureAuth took 495.775685ms
	I0810 22:54:24.921328  536187 ubuntu.go:193] setting minikube options for container-runtime
	I0810 22:54:24.921639  536187 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210810225417-345780
	I0810 22:54:24.969591  536187 main.go:130] libmachine: Using SSH client type: native
	I0810 22:54:24.969774  536187 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I0810 22:54:24.969799  536187 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:54:25.338402  536187 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:54:25.338437  536187 machine.go:91] provisioned docker machine in 1.301028205s
	I0810 22:54:25.338447  536187 client.go:171] LocalClient.Create took 6.940010337s
	I0810 22:54:25.338468  536187 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20210810225417-345780" took 6.940072432s
	I0810 22:54:25.338485  536187 start.go:267] post-start starting for "old-k8s-version-20210810225417-345780" (driver="docker")
	I0810 22:54:25.338492  536187 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:54:25.338558  536187 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:54:25.338611  536187 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210810225417-345780
	I0810 22:54:25.385079  536187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210810225417-345780/id_rsa Username:docker}
	I0810 22:54:25.469108  536187 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:54:25.472021  536187 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 22:54:25.472041  536187 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 22:54:25.472049  536187 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 22:54:25.472055  536187 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0810 22:54:25.472065  536187 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:54:25.472139  536187 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:54:25.472224  536187 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> 3457802.pem in /etc/ssl/certs
	I0810 22:54:25.472319  536187 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:54:25.478774  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:54:25.496317  536187 start.go:270] post-start completed in 157.819162ms
	I0810 22:54:25.497182  536187 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210810225417-345780
	I0810 22:54:25.537721  536187 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/config.json ...
	I0810 22:54:25.537950  536187 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:54:25.537991  536187 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210810225417-345780
	I0810 22:54:25.579692  536187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210810225417-345780/id_rsa Username:docker}
	I0810 22:54:25.668905  536187 start.go:129] duration metric: createHost completed in 7.273114787s
	I0810 22:54:25.668961  536187 start.go:80] releasing machines lock for "old-k8s-version-20210810225417-345780", held for 7.273309921s
	I0810 22:54:25.669055  536187 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210810225417-345780
	I0810 22:54:25.713913  536187 ssh_runner.go:149] Run: systemctl --version
	I0810 22:54:25.713968  536187 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210810225417-345780
	I0810 22:54:25.714038  536187 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:54:25.714105  536187 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210810225417-345780
	I0810 22:54:25.764982  536187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210810225417-345780/id_rsa Username:docker}
	I0810 22:54:25.777479  536187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210810225417-345780/id_rsa Username:docker}
	I0810 22:54:25.853676  536187 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:54:25.893656  536187 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:54:25.905248  536187 docker.go:153] disabling docker service ...
	I0810 22:54:25.905301  536187 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:54:25.915373  536187 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:54:25.924320  536187 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:54:26.006366  536187 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:54:26.101095  536187 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:54:26.111330  536187 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:54:26.125548  536187 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0810 22:54:26.134951  536187 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:54:26.134979  536187 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:54:26.147662  536187 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:54:26.159053  536187 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:54:26.159129  536187 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:54:26.169964  536187 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:54:26.179293  536187 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:54:26.255806  536187 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:54:26.266839  536187 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:54:26.266923  536187 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:54:26.270275  536187 start.go:417] Will wait 60s for crictl version
	I0810 22:54:26.270329  536187 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:54:26.299189  536187 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0810 22:54:26.299258  536187 ssh_runner.go:149] Run: crio --version
	I0810 22:54:26.378518  536187 ssh_runner.go:149] Run: crio --version
	I0810 22:54:26.377849  532757 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0810 22:54:26.377926  532757 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:54:26.381879  532757 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 22:54:26.381893  532757 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:54:26.396982  532757 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:54:26.790820  532757 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0810 22:54:26.790936  532757 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:54:26.790942  532757 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=cert-options-20210810225357-345780 minikube.k8s.io/updated_at=2021_08_10T22_54_26_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:54:26.886925  532757 kubeadm.go:985] duration metric: took 96.054476ms to wait for elevateKubeSystemPrivileges.
	I0810 22:54:26.908456  532757 ops.go:34] apiserver oom_adj: -16
	I0810 22:54:26.908481  532757 kubeadm.go:392] StartCluster complete in 19.986388809s
	I0810 22:54:26.908502  532757 settings.go:142] acquiring lock: {Name:mka213f92e424859b3fea9ed3e06c1529c3d79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:26.908593  532757 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:54:26.910135  532757 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mk4b0a8134f819d1f0c4fc03757f6964ae0e24de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:27.427672  532757 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cert-options-20210810225357-345780" rescaled to 1
	I0810 22:54:27.427728  532757 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:54:27.427750  532757 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:54:27.430442  532757 out.go:177] * Verifying Kubernetes components...
	I0810 22:54:27.430506  532757 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:54:27.427844  532757 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0810 22:54:27.430569  532757 addons.go:59] Setting storage-provisioner=true in profile "cert-options-20210810225357-345780"
	I0810 22:54:27.430585  532757 addons.go:135] Setting addon storage-provisioner=true in "cert-options-20210810225357-345780"
	I0810 22:54:27.430582  532757 addons.go:59] Setting default-storageclass=true in profile "cert-options-20210810225357-345780"
	W0810 22:54:27.430592  532757 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:54:27.430601  532757 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-20210810225357-345780"
	I0810 22:54:27.430624  532757 host.go:66] Checking if "cert-options-20210810225357-345780" exists ...
	I0810 22:54:27.431012  532757 cli_runner.go:115] Run: docker container inspect cert-options-20210810225357-345780 --format={{.State.Status}}
	I0810 22:54:27.431161  532757 cli_runner.go:115] Run: docker container inspect cert-options-20210810225357-345780 --format={{.State.Status}}
	I0810 22:54:26.455752  536187 out.go:177] * Preparing Kubernetes v1.14.0 on CRI-O 1.20.3 ...
	I0810 22:54:26.455850  536187 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210810225417-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:54:26.505198  536187 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0810 22:54:26.508828  536187 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:54:26.519247  536187 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0810 22:54:26.519311  536187 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:54:26.564883  536187 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:54:26.564905  536187 crio.go:333] Images already preloaded, skipping extraction
	I0810 22:54:26.564987  536187 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:54:26.590889  536187 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:54:26.590915  536187 cache_images.go:74] Images are preloaded, skipping loading
	I0810 22:54:26.590981  536187 ssh_runner.go:149] Run: crio config
	I0810 22:54:26.669529  536187 cni.go:93] Creating CNI manager for ""
	I0810 22:54:26.669553  536187 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:54:26.669564  536187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:54:26.669575  536187 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210810225417-345780 NodeName:old-k8s-version-20210810225417-345780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:54:26.669740  536187 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-20210810225417-345780"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210810225417-345780
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:54:26.669850  536187 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-20210810225417-345780 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210810225417-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:54:26.669913  536187 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0810 22:54:26.677870  536187 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:54:26.677961  536187 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:54:26.685545  536187 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (640 bytes)
	I0810 22:54:26.698218  536187 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:54:26.711232  536187 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2149 bytes)
	I0810 22:54:26.723635  536187 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0810 22:54:26.726555  536187 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
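	The bash one-liner above pins `control-plane.minikube.internal` in `/etc/hosts` idempotently: it strips any existing entry for the hostname, then appends the current IP. A minimal standalone sketch of that logic in Go (paths and IPs here are illustrative, not minikube's actual implementation, which shells out exactly as logged):

	```go
	// Sketch: idempotently pin a hostname to an IP in a hosts-format file,
	// mirroring the grep -v / echo / cp pipeline in the log line above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func pinHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Same pattern grep -v filters: a tab followed by the hostname at end of line.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		f, err := os.CreateTemp("", "hosts")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		f.WriteString("127.0.0.1\tlocalhost\n192.168.67.3\tcontrol-plane.minikube.internal\n")
		f.Close()
		if err := pinHost(f.Name(), "192.168.67.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
		out, _ := os.ReadFile(f.Name())
		fmt.Print(string(out))
	}
	```

	Running the sketch replaces the stale `192.168.67.3` entry with a single `192.168.67.2` line, which is why the real pipeline is safe to re-run on every start.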
	I0810 22:54:26.735991  536187 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780 for IP: 192.168.67.2
	I0810 22:54:26.736041  536187 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:54:26.736058  536187 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:54:26.736107  536187 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.key
	I0810 22:54:26.736116  536187 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt with IP's: []
	I0810 22:54:26.858282  536187 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt ...
	I0810 22:54:26.858331  536187 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: {Name:mkd60c6ee1b1d5bbd2b458311b712360f04dd362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:26.858604  536187 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.key ...
	I0810 22:54:26.858631  536187 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.key: {Name:mk48cd95086dba10350198771d624bf178f2c814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:26.858778  536187 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.key.c7fa3a9e
	I0810 22:54:26.858793  536187 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0810 22:54:27.067833  536187 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.crt.c7fa3a9e ...
	I0810 22:54:27.067871  536187 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.crt.c7fa3a9e: {Name:mkf8e35a6904f9753c0c2083e56b3df878f6128e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:27.068077  536187 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.key.c7fa3a9e ...
	I0810 22:54:27.068091  536187 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.key.c7fa3a9e: {Name:mk800862e87b64fd83dc91cc41b9227d425c9cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:27.068166  536187 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.crt
	I0810 22:54:27.068225  536187 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.key
	I0810 22:54:27.068273  536187 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/proxy-client.key
	I0810 22:54:27.068282  536187 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/proxy-client.crt with IP's: []
	I0810 22:54:27.149147  536187 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/proxy-client.crt ...
	I0810 22:54:27.149188  536187 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/proxy-client.crt: {Name:mk0d3295b703939a19f3008bdbd234f4d07bd0a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:27.149397  536187 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/proxy-client.key ...
	I0810 22:54:27.149412  536187 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/proxy-client.key: {Name:mka46df24e12bae2a2fb0f92b2e64e68738da3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
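	The `crypto.go` steps above generate CA-signed certs whose SANs cover the node and service IPs (`192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1` for the apiserver cert). A self-contained sketch of that step using only the Go standard library; subject fields and lifetimes are illustrative, and minikube's real code differs in detail:

	```go
	// Sketch: issue a self-signed serving certificate whose SAN list carries
	// the IPs shown in the "Generating cert ... with IP's: [...]" log line.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		// Self-signed here for brevity; the real flow signs with the minikubeCA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		cert, err := x509.ParseCertificate(der)
		if err != nil {
			panic(err)
		}
		fmt.Println(len(cert.IPAddresses))
	}
	```

	The parsed certificate retains all four IP SANs, which is what lets clients reach the apiserver by node IP, service VIP, or loopback.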
	I0810 22:54:27.149652  536187 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem (1338 bytes)
	W0810 22:54:27.149697  536187 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780_empty.pem, impossibly tiny 0 bytes
	I0810 22:54:27.149706  536187 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0810 22:54:27.149729  536187 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:54:27.149754  536187 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:54:27.149777  536187 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:54:27.149831  536187 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 22:54:27.150796  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:54:27.169857  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0810 22:54:27.187767  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:54:27.205394  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0810 22:54:27.222904  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:54:27.240441  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:54:27.257127  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:54:27.273287  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0810 22:54:27.289406  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:54:27.305728  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem --> /usr/share/ca-certificates/345780.pem (1338 bytes)
	I0810 22:54:27.321540  536187 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /usr/share/ca-certificates/3457802.pem (1708 bytes)
	I0810 22:54:27.338775  536187 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:54:27.351568  536187 ssh_runner.go:149] Run: openssl version
	I0810 22:54:27.356428  536187 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:54:27.364162  536187 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:54:27.367322  536187 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:54:27.367377  536187 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:54:27.372167  536187 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:54:27.379625  536187 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/345780.pem && ln -fs /usr/share/ca-certificates/345780.pem /etc/ssl/certs/345780.pem"
	I0810 22:54:27.386968  536187 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/345780.pem
	I0810 22:54:27.389817  536187 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:29 /usr/share/ca-certificates/345780.pem
	I0810 22:54:27.389863  536187 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/345780.pem
	I0810 22:54:27.394577  536187 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/345780.pem /etc/ssl/certs/51391683.0"
	I0810 22:54:27.401519  536187 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3457802.pem && ln -fs /usr/share/ca-certificates/3457802.pem /etc/ssl/certs/3457802.pem"
	I0810 22:54:27.408905  536187 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3457802.pem
	I0810 22:54:27.412098  536187 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:29 /usr/share/ca-certificates/3457802.pem
	I0810 22:54:27.412145  536187 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3457802.pem
	I0810 22:54:27.416807  536187 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3457802.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 22:54:27.423871  536187 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210810225417-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210810225417-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:54:27.423970  536187 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:54:27.424012  536187 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:54:27.455672  536187 cri.go:76] found id: ""
	I0810 22:54:27.455765  536187 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:54:27.467656  536187 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 22:54:27.475984  536187 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0810 22:54:27.476052  536187 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:54:27.483807  536187 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 22:54:27.483859  536187 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0810 22:54:27.485223  532757 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:54:27.485369  532757 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:54:27.485378  532757 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:54:27.485447  532757 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210810225357-345780
	I0810 22:54:27.496768  532757 addons.go:135] Setting addon default-storageclass=true in "cert-options-20210810225357-345780"
	W0810 22:54:27.496782  532757 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:54:27.496813  532757 host.go:66] Checking if "cert-options-20210810225357-345780" exists ...
	I0810 22:54:27.497382  532757 cli_runner.go:115] Run: docker container inspect cert-options-20210810225357-345780 --format={{.State.Status}}
	I0810 22:54:27.518402  532757 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
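	The kubectl/sed pipeline above rewrites the CoreDNS Corefile so `host.minikube.internal` resolves to the host gateway: it inserts a `hosts` block immediately before the `forward . /etc/resolv.conf` line. A sketch of that text transformation (the Corefile content below is a typical default, not pulled from this cluster):

	```go
	// Sketch: insert a CoreDNS `hosts` stanza before the forward plugin line,
	// as the sed expression in the log line above does.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHosts(corefile, ip string) string {
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				indent := line[:len(line)-len(strings.TrimLeft(line, " "))]
				out = append(out,
					indent+"hosts {",
					indent+"   "+ip+" host.minikube.internal",
					indent+"   fallthrough",
					indent+"}")
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}

	func main() {
		corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}"
		fmt.Println(injectHosts(corefile, "192.168.76.1"))
	}
	```

	Because the new block ends with `fallthrough`, only the injected name is served from the `hosts` plugin; everything else still falls through to `forward`, matching the "host record injected into CoreDNS" message later in the log.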
	I0810 22:54:27.520964  532757 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:54:27.521004  532757 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:54:27.553038  532757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cert-options-20210810225357-345780/id_rsa Username:docker}
	I0810 22:54:27.558967  532757 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:54:27.558985  532757 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:54:27.559049  532757 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210810225357-345780
	I0810 22:54:27.615199  532757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cert-options-20210810225357-345780/id_rsa Username:docker}
	I0810 22:54:27.675838  532757 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:54:27.773496  532757 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:54:27.881490  532757 start.go:736] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0810 22:54:27.881523  532757 api_server.go:70] duration metric: took 453.75855ms to wait for apiserver process to appear ...
	I0810 22:54:27.881541  532757 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:54:27.881555  532757 api_server.go:239] Checking apiserver healthz at https://192.168.76.2:8555/healthz ...
	I0810 22:54:27.894802  532757 api_server.go:265] https://192.168.76.2:8555/healthz returned 200:
	ok
	I0810 22:54:27.895940  532757 api_server.go:139] control plane version: v1.21.3
	I0810 22:54:27.895956  532757 api_server.go:129] duration metric: took 14.410389ms to wait for apiserver health ...
	I0810 22:54:27.895965  532757 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:54:27.905071  532757 system_pods.go:59] 0 kube-system pods found
	I0810 22:54:27.905104  532757 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
	I0810 22:54:28.171046  532757 system_pods.go:59] 1 kube-system pods found
	I0810 22:54:28.171068  532757 system_pods.go:61] "storage-provisioner" [4c73966f-7aa3-4716-8c98-f95a060a02ca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0810 22:54:28.171082  532757 retry.go:31] will retry after 381.329545ms: only 1 pod(s) have shown up
	I0810 22:54:27.887072  536187 out.go:204]   - Generating certificates and keys ...
	I0810 22:54:28.176876  532757 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0810 22:54:28.176903  532757 addons.go:344] enableAddons completed in 749.07197ms
	I0810 22:54:28.556406  532757 system_pods.go:59] 1 kube-system pods found
	I0810 22:54:28.556429  532757 system_pods.go:61] "storage-provisioner" [4c73966f-7aa3-4716-8c98-f95a060a02ca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0810 22:54:28.556445  532757 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
	I0810 22:54:28.983237  532757 system_pods.go:59] 1 kube-system pods found
	I0810 22:54:28.983255  532757 system_pods.go:61] "storage-provisioner" [4c73966f-7aa3-4716-8c98-f95a060a02ca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0810 22:54:28.983269  532757 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
	I0810 22:54:29.460211  532757 system_pods.go:59] 1 kube-system pods found
	I0810 22:54:29.460231  532757 system_pods.go:61] "storage-provisioner" [4c73966f-7aa3-4716-8c98-f95a060a02ca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0810 22:54:29.460244  532757 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
	I0810 22:54:30.051229  532757 system_pods.go:59] 1 kube-system pods found
	I0810 22:54:30.051248  532757 system_pods.go:61] "storage-provisioner" [4c73966f-7aa3-4716-8c98-f95a060a02ca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0810 22:54:30.051261  532757 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
	I0810 22:54:30.893680  532757 system_pods.go:59] 1 kube-system pods found
	I0810 22:54:30.893699  532757 system_pods.go:61] "storage-provisioner" [4c73966f-7aa3-4716-8c98-f95a060a02ca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0810 22:54:30.893713  532757 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
	I0810 22:54:31.936193  536187 out.go:204]   - Booting up control plane ...
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:52:35 UTC, end at Tue 2021-08-10 22:54:36 UTC. --
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.857032895Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.859112816Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.862845038Z" level=info msg="Conmon does support the --sync option"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.862941592Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.862953810Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.870318243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.873285153Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.876527569Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.890975120Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.891014787Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 10 22:54:20 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:20.329644985Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-9tljg Namespace:kube-system ID:97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181 NetNS:/var/run/netns/de69ec9c-b650-4219-9b68-b3b9062cf15a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 10 22:54:20 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:20.329948897Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 10 22:54:20 pause-20210810225233-345780 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.311949034Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=c67644af-aeef-4c28-a3c4-9f34ca652936 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.458179604Z" level=info msg="Ran pod sandbox 37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f with infra container: kube-system/storage-provisioner/POD" id=c67644af-aeef-4c28-a3c4-9f34ca652936 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.459955429Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=13158592-0516-4aa6-9c98-d99698544242 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.461417486Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=13158592-0516-4aa6-9c98-d99698544242 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.466360851Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e7cedc74-2fd3-4ad5-8979-4030d1c93062 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.467217861Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e7cedc74-2fd3-4ad5-8979-4030d1c93062 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.468056817Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=67b2845e-cd71-4994-b793-4660722f8fed name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.480865848Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/96a65636ee386c6cc0fd629c9f54aac7dd527b612c86286bc34b46437b7530fc/merged/etc/passwd: no such file or directory"
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.481144221Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/96a65636ee386c6cc0fd629c9f54aac7dd527b612c86286bc34b46437b7530fc/merged/etc/group: no such file or directory"
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.641551609Z" level=info msg="Created container 26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d: kube-system/storage-provisioner/storage-provisioner" id=67b2845e-cd71-4994-b793-4660722f8fed name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.642142924Z" level=info msg="Starting container: 26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d" id=009bc771-8c7b-4782-8a11-c0ec244eec7b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.652081064Z" level=info msg="Started container 26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d: kube-system/storage-provisioner/storage-provisioner" id=009bc771-8c7b-4782-8a11-c0ec244eec7b name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	26755da5fc303       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago       Running             storage-provisioner       0                   37ab8d7270bbc
	27f23a1705c7a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   20 seconds ago       Running             coredns                   0                   97e17aed56acd
	3351b40c4bee3       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   About a minute ago   Running             kindnet-cni               0                   898bd0d9b428e
	15938cb1c549c       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   b7bae7384d157
	46e328447ea1f       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   1be5f8b230497
	a540c3a2f6071       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   e30f9d0a5b499
	22e7cf1507477       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   a659338881290
	57127954bdfc1       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   94f10dd306eb5
	
	* 
	* ==> coredns [27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.005471] IPv4: martian source 10.88.0.3 from 10.88.0.3, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 22 8a 79 ab 40 08 06        .......".y.@..
	[  +0.000005] IPv4: martian source 10.88.0.3 from 10.88.0.3, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 8e 22 8a 79 ab 40 08 06        .......".y.@..
	[Aug10 22:52] IPv4: martian source 10.88.0.5 from 10.88.0.5, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 16 94 ad a7 98 49 08 06        ...........I..
	[  +0.000193] IPv4: martian source 10.88.0.4 from 10.88.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 51 85 f7 b2 26 08 06        .......Q...&..
	[ +18.482598] cgroup: cgroup2: unknown option "nsdelegate"
	[ +18.125191] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:53] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.448442] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 16 d4 57 d2 55 58 08 06        ........W.UX..
	[  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 16 d4 57 d2 55 58 08 06        ........W.UX..
	[  +0.217157] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e 34 6d 63 a8 9e 08 06        .......4mc....
	[  +8.255571] cgroup: cgroup2: unknown option "nsdelegate"
	[ +19.470015] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 89 da dc dc 8a 08 06        ......>.......
	[  +0.519360] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:54] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb1cc53e2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c6 5b eb c6 d4 08 06        ........[.....
	[  +5.184517] cgroup: cgroup2: unknown option "nsdelegate"
	[ +22.322021] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc] <==
	* 2021-08-10 22:53:04.823785 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (1.775139388s) to execute
	2021-08-10 22:53:04.823936 W | etcdserver: read-only range request "key:\"/registry/flowschemas/probes\" " with result "range_response_count:1 size:945" took too long (2.227468173s) to execute
	2021-08-10 22:53:18.474090 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:53:27.597008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:53:32.083305 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (897.67387ms) to execute
	2021-08-10 22:53:32.083407 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-9tljg\" " with result "range_response_count:1 size:4461" took too long (1.104277292s) to execute
	2021-08-10 22:53:34.242609 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.587690587s) to execute
	2021-08-10 22:53:34.242649 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.053365072s) to execute
	2021-08-10 22:53:34.242728 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-9tljg\" " with result "range_response_count:1 size:4461" took too long (1.26356793s) to execute
	2021-08-10 22:53:34.242841 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-kzdm8\" " with result "range_response_count:1 size:4473" took too long (1.590538009s) to execute
	2021-08-10 22:53:37.596282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:53:47.596768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:53:57.597080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:54:07.596533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:54:11.182408 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-9tljg\" " with result "range_response_count:1 size:4461" took too long (203.318149ms) to execute
	2021-08-10 22:54:14.689413 W | wal: sync duration of 1.181409265s, expected less than 1s
	2021-08-10 22:54:15.366235 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.181847543s) to execute
	2021-08-10 22:54:15.366320 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-9tljg\" " with result "range_response_count:1 size:4461" took too long (1.387470153s) to execute
	2021-08-10 22:54:15.366365 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.037761789s) to execute
	2021-08-10 22:54:15.366541 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (908.765084ms) to execute
	2021-08-10 22:54:17.596701 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:54:31.850592 W | wal: sync duration of 1.041792917s, expected less than 1s
	2021-08-10 22:54:31.929726 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (146.505751ms) to execute
	2021-08-10 22:54:31.929941 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:6149" took too long (255.08033ms) to execute
	2021-08-10 22:54:35.205475 W | wal: sync duration of 3.187765059s, expected less than 1s
	
	* 
	* ==> kernel <==
	*  22:54:46 up  2:37,  0 users,  load average: 5.48, 3.59, 2.57
	Linux pause-20210810225233-345780 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32] <==
	* I0810 22:53:06.863054       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0810 22:53:12.237090       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0810 22:53:19.694668       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0810 22:53:20.393804       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0810 22:53:24.637835       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:53:24.637878       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:53:24.637886       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:53:32.084592       1 trace.go:205] Trace[1971463959]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-9tljg,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (10-Aug-2021 22:53:30.978) (total time: 1105ms):
	Trace[1971463959]: ---"About to write a response" 1105ms (22:53:00.084)
	Trace[1971463959]: [1.105995789s] [1.105995789s] END
	I0810 22:53:34.243822       1 trace.go:205] Trace[366570254]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-kzdm8,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (10-Aug-2021 22:53:32.651) (total time: 1592ms):
	Trace[366570254]: ---"About to write a response" 1591ms (22:53:00.243)
	Trace[366570254]: [1.592224682s] [1.592224682s] END
	I0810 22:53:34.244313       1 trace.go:205] Trace[120626499]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-9tljg,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (10-Aug-2021 22:53:32.978) (total time: 1265ms):
	Trace[120626499]: ---"About to write a response" 1265ms (22:53:00.243)
	Trace[120626499]: [1.265879308s] [1.265879308s] END
	I0810 22:53:57.903048       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:53:57.903107       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:53:57.903118       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:54:15.367508       1 trace.go:205] Trace[1949810703]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-9tljg,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (10-Aug-2021 22:54:13.978) (total time: 1389ms):
	Trace[1949810703]: ---"About to write a response" 1388ms (22:54:00.366)
	Trace[1949810703]: [1.389123285s] [1.389123285s] END
	I0810 22:54:31.650486       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:54:31.650530       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:54:31.650539       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714] <==
	* I0810 22:53:19.705538       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w546v"
	E0810 22:53:19.724304       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c7b81213-a61c-463f-880d-31df965c74df", ResourceVersion:"391", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764232786, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002208ed0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002208ee8)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002208f00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002208f18)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0020ccda0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00191ff00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002208f30), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002208f48), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0020ccde0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001019920), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002264428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b7ae00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0022562a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002264478)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0810 22:53:19.726190       1 shared_informer.go:247] Caches are synced for deployment 
	I0810 22:53:19.757128       1 shared_informer.go:247] Caches are synced for expand 
	I0810 22:53:19.757246       1 shared_informer.go:247] Caches are synced for disruption 
	I0810 22:53:19.757255       1 disruption.go:371] Sending events to api server.
	I0810 22:53:19.757295       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0810 22:53:19.795016       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0810 22:53:19.841418       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0810 22:53:19.851502       1 shared_informer.go:247] Caches are synced for resource quota 
	I0810 22:53:19.863997       1 shared_informer.go:247] Caches are synced for endpoint 
	I0810 22:53:19.891334       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0810 22:53:19.940369       1 shared_informer.go:247] Caches are synced for resource quota 
	I0810 22:53:20.355265       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0810 22:53:20.385042       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0810 22:53:20.385066       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0810 22:53:20.395872       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0810 22:53:20.408179       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0810 22:53:20.651012       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-kzdm8"
	I0810 22:53:20.657132       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-9tljg"
	I0810 22:53:20.679594       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-kzdm8"
	I0810 22:53:24.598620       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0810 22:53:24.598942       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-kzdm8" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-558bd4d5db-kzdm8"
	I0810 22:53:24.598970       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-9tljg" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-558bd4d5db-9tljg"
	
	* 
	* ==> kube-proxy [15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356] <==
	* I0810 22:53:21.387127       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0810 22:53:21.387215       1 server_others.go:140] Detected node IP 192.168.49.2
	W0810 22:53:21.387248       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0810 22:53:21.409577       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0810 22:53:21.409607       1 server_others.go:212] Using iptables Proxier.
	I0810 22:53:21.409618       1 server_others.go:219] creating dualStackProxier for iptables.
	W0810 22:53:21.409628       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0810 22:53:21.409956       1 server.go:643] Version: v1.21.3
	I0810 22:53:21.411443       1 config.go:224] Starting endpoint slice config controller
	I0810 22:53:21.411647       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0810 22:53:21.411558       1 config.go:315] Starting service config controller
	I0810 22:53:21.411796       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0810 22:53:21.415382       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0810 22:53:21.416507       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0810 22:53:21.512697       1 shared_informer.go:247] Caches are synced for service config 
	I0810 22:53:21.512715       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316] <==
	* E0810 22:52:58.953592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:52:59.130553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:52:59.139759       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:52:59.164281       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:52:59.226794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:52:59.250049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:52:59.294570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:52:59.336846       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:53:00.524690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:53:00.827329       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:53:01.159808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:53:01.259500       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:53:01.526752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:53:01.567389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:53:01.602565       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:53:01.687584       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:53:01.728893       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:53:01.856858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:53:01.873391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:53:01.878650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:53:02.306388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:53:02.415055       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:53:04.509812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:53:04.738008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0810 22:53:07.160403       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:52:35 UTC, end at Tue 2021-08-10 22:54:46 UTC. --
	Aug 10 22:54:26 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:26.434009    4045 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115386    4045 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:false CgroupRoot: CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115438    4045 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115455    4045 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115464    4045 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115637    4045 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115672    4045 remote_runtime.go:62] parsed scheme: ""
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115678    4045 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115749    4045 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115760    4045 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115820    4045 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115831    4045 remote_image.go:50] parsed scheme: ""
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115835    4045 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115843    4045 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115847    4045 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115915    4045 kubelet.go:404] "Attempting to sync node with API server"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115931    4045 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115955    4045 kubelet.go:283] "Adding apiserver pod source"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115969    4045 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.124591    4045 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="cri-o" version="1.20.3" apiVersion="v1alpha1"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: E0810 22:54:31.422858    4045 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.423463    4045 server.go:1190] "Started kubelet"
	Aug 10 22:54:31 pause-20210810225233-345780 systemd[1]: kubelet.service: Succeeded.
	Aug 10 22:54:31 pause-20210810225233-345780 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d] <==
	* I0810 22:54:23.661988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:54:23.671948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:54:23.672008       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:54:23.685630       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:54:23.685819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210810225233-345780_45d49143-1b99-4ab6-a806-f691e591097c!
	I0810 22:54:23.685827       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6761ff01-40ae-48fe-9365-e1e10384b78a", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210810225233-345780_45d49143-1b99-4ab6-a806-f691e591097c became leader
	I0810 22:54:23.786938       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210810225233-345780_45d49143-1b99-4ab6-a806-f691e591097c!
	
	

-- /stdout --
** stderr ** 
	E0810 22:54:46.530545  540826 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210810225233-345780
helpers_test.go:236: (dbg) docker inspect pause-20210810225233-345780:

-- stdout --
	[
	    {
	        "Id": "f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8",
	        "Created": "2021-08-10T22:52:34.698259045Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:52:35.182460391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/hostname",
	        "HostsPath": "/var/lib/docker/containers/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/hosts",
	        "LogPath": "/var/lib/docker/containers/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8/f944c8ee123ae8cb8d65ab0281838033c9de5b5a14746bf2015aeeb3281d1af8-json.log",
	        "Name": "/pause-20210810225233-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210810225233-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210810225233-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cd2fd6f184e67618f831d8314c23e4cf96be00c8a82cf64a9b5e6394f60054e2-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c874215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/docker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f80f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd2fd6f184e67618f831d8314c23e4cf96be00c8a82cf64a9b5e6394f60054e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd2fd6f184e67618f831d8314c23e4cf96be00c8a82cf64a9b5e6394f60054e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd2fd6f184e67618f831d8314c23e4cf96be00c8a82cf64a9b5e6394f60054e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210810225233-345780",
	                "Source": "/var/lib/docker/volumes/pause-20210810225233-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210810225233-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210810225233-345780",
	                "name.minikube.sigs.k8s.io": "pause-20210810225233-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f404af36dee4eaa00a4f1a79dba38668589f298b9dea0f5fe10c13defbea12c9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f404af36dee4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210810225233-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f944c8ee123a"
	                    ],
	                    "NetworkID": "51e1127032be582af01cfcb85b893562f9fc6c893e0e850dd2e1e3269326ab00",
	                    "EndpointID": "414033aeecc037bf5562dc276192e2977befd5451fdd0adc84be4523c20d6a3a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210810225233-345780 -n pause-20210810225233-345780
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210810225233-345780 -n pause-20210810225233-345780: exit status 2 (383.232553ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210810225233-345780 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210810225233-345780 logs -n 25: exit status 110 (13.608149456s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                         | scheduled-stop-20210810224840-345780       | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:08 UTC | Tue, 10 Aug 2021 22:49:08 UTC |
	|         | scheduled-stop-20210810224840-345780       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210810224840-345780       | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:21 UTC | Tue, 10 Aug 2021 22:49:38 UTC |
	|         | scheduled-stop-20210810224840-345780       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210810224840-345780       | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:38 UTC | Tue, 10 Aug 2021 22:49:43 UTC |
	|         | scheduled-stop-20210810224840-345780       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210810224943-345780 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:51 UTC | Tue, 10 Aug 2021 22:49:57 UTC |
	|         | insufficient-storage-20210810224943-345780 |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210810224957-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:57 UTC | Tue, 10 Aug 2021 22:50:53 UTC |
	|         | kubernetes-upgrade-20210810224957-345780   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p                                         | offline-crio-20210810224957-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:49:57 UTC | Tue, 10 Aug 2021 22:51:43 UTC |
	|         | offline-crio-20210810224957-345780         |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                     |                                            |         |         |                               |                               |
	|         | --memory=2048 --wait=true                  |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | offline-crio-20210810224957-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:51:44 UTC | Tue, 10 Aug 2021 22:51:47 UTC |
	|         | offline-crio-20210810224957-345780         |                                            |         |         |                               |                               |
	| delete  | -p                                         | running-upgrade-20210810224957-345780      | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:28 UTC | Tue, 10 Aug 2021 22:52:31 UTC |
	|         | running-upgrade-20210810224957-345780      |                                            |         |         |                               |                               |
	| delete  | -p                                         | stopped-upgrade-20210810224957-345780      | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:45 UTC | Tue, 10 Aug 2021 22:52:48 UTC |
	|         | stopped-upgrade-20210810224957-345780      |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210810225248-345780              | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:48 UTC | Tue, 10 Aug 2021 22:52:48 UTC |
	|         | kubenet-20210810225248-345780              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210810225248-345780              | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:48 UTC | Tue, 10 Aug 2021 22:52:49 UTC |
	|         | flannel-20210810225248-345780              |                                            |         |         |                               |                               |
	| delete  | -p false-20210810225249-345780             | false-20210810225249-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:49 UTC | Tue, 10 Aug 2021 22:52:49 UTC |
	| start   | -p                                         | force-systemd-flag-20210810225249-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:49 UTC | Tue, 10 Aug 2021 22:53:34 UTC |
	|         | force-systemd-flag-20210810225249-345780   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210810225249-345780   | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:34 UTC | Tue, 10 Aug 2021 22:53:37 UTC |
	|         | force-systemd-flag-20210810225249-345780   |                                            |         |         |                               |                               |
	| start   | -p                                         | missing-upgrade-20210810225147-345780      | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:06 UTC | Tue, 10 Aug 2021 22:53:54 UTC |
	|         | missing-upgrade-20210810225147-345780      |                                            |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | missing-upgrade-20210810225147-345780      | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:54 UTC | Tue, 10 Aug 2021 22:53:57 UTC |
	|         | missing-upgrade-20210810225147-345780      |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-env-20210810225337-345780    | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:37 UTC | Tue, 10 Aug 2021 22:54:11 UTC |
	|         | force-systemd-env-20210810225337-345780    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210810225337-345780    | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:13 UTC | Tue, 10 Aug 2021 22:54:17 UTC |
	|         | force-systemd-env-20210810225337-345780    |                                            |         |         |                               |                               |
	| start   | -p pause-20210810225233-345780             | pause-20210810225233-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:52:33 UTC | Tue, 10 Aug 2021 22:54:17 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p pause-20210810225233-345780             | pause-20210810225233-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:18 UTC | Tue, 10 Aug 2021 22:54:24 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| pause   | -p pause-20210810225233-345780             | pause-20210810225233-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:24 UTC | Tue, 10 Aug 2021 22:54:24 UTC |
	|         | --alsologtostderr -v=5                     |                                            |         |         |                               |                               |
	| unpause | -p pause-20210810225233-345780             | pause-20210810225233-345780                | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:25 UTC | Tue, 10 Aug 2021 22:54:25 UTC |
	|         | --alsologtostderr -v=5                     |                                            |         |         |                               |                               |
	| start   | -p                                         | cert-options-20210810225357-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:53:57 UTC | Tue, 10 Aug 2021 22:54:36 UTC |
	|         | cert-options-20210810225357-345780         |                                            |         |         |                               |                               |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                  |                                            |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15              |                                            |         |         |                               |                               |
	|         | --apiserver-names=localhost                |                                            |         |         |                               |                               |
	|         | --apiserver-names=www.google.com           |                                            |         |         |                               |                               |
	|         | --apiserver-port=8555                      |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | cert-options-20210810225357-345780         | cert-options-20210810225357-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:36 UTC | Tue, 10 Aug 2021 22:54:36 UTC |
	|         | ssh openssl x509 -text -noout -in          |                                            |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt      |                                            |         |         |                               |                               |
	| delete  | -p                                         | cert-options-20210810225357-345780         | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:54:36 UTC | Tue, 10 Aug 2021 22:54:39 UTC |
	|         | cert-options-20210810225357-345780         |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:54:39
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:54:39.903951  542009 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:54:39.904053  542009 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:54:39.904058  542009 out.go:311] Setting ErrFile to fd 2...
	I0810 22:54:39.904061  542009 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:54:39.904172  542009 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:54:39.904467  542009 out.go:305] Setting JSON to false
	I0810 22:54:39.941234  542009 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":9441,"bootTime":1628626639,"procs":269,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:54:39.941345  542009 start.go:121] virtualization: kvm guest
	I0810 22:54:39.944577  542009 out.go:177] * [no-preload-20210810225439-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:54:39.946879  542009 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:54:39.944761  542009 notify.go:169] Checking for updates...
	I0810 22:54:39.948781  542009 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:54:39.950750  542009 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:54:39.952326  542009 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:54:39.953263  542009 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:54:40.016317  542009 docker.go:132] docker version: linux-19.03.15
	I0810 22:54:40.016418  542009 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:54:40.108835  542009 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:53 SystemTime:2021-08-10 22:54:40.056878372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:54:40.108992  542009 docker.go:244] overlay module found
	I0810 22:54:40.111566  542009 out.go:177] * Using the docker driver based on user configuration
	I0810 22:54:40.111603  542009 start.go:278] selected driver: docker
	I0810 22:54:40.111613  542009 start.go:751] validating driver "docker" against <nil>
	I0810 22:54:40.111637  542009 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:54:40.111737  542009 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:54:40.111768  542009 out.go:242] ! Your cgroup does not allow setting memory.
	I0810 22:54:40.113441  542009 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:54:40.114409  542009 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:54:40.211884  542009 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:53 SystemTime:2021-08-10 22:54:40.152727376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:54:40.212045  542009 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:54:40.212259  542009 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:54:40.212288  542009 cni.go:93] Creating CNI manager for ""
	I0810 22:54:40.212296  542009 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:54:40.212304  542009 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0810 22:54:40.212336  542009 start_flags.go:277] config:
	{Name:no-preload-20210810225439-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210810225439-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:54:40.215078  542009 out.go:177] * Starting control plane node no-preload-20210810225439-345780 in cluster no-preload-20210810225439-345780
	I0810 22:54:40.215137  542009 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:54:40.216903  542009 out.go:177] * Pulling base image ...
	I0810 22:54:40.216975  542009 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0810 22:54:40.217094  542009 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:54:40.217136  542009 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210810225439-345780/config.json ...
	I0810 22:54:40.217181  542009 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210810225439-345780/config.json: {Name:mk96916255854297706c6e6b082239870a3cc450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:54:40.217466  542009 cache.go:108] acquiring lock: {Name:mk2992684e28e28c0a4befdb8ebb26ca589cb57f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217513  542009 cache.go:108] acquiring lock: {Name:mkfe25cfc62d7940332ec761acdf8bda40d35906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217546  542009 cache.go:108] acquiring lock: {Name:mka516412e4443a77db4aae7c0ad8e25e39db91f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217543  542009 cache.go:108] acquiring lock: {Name:mkd094bf87ebf585424cb6c91d65711af3bd40fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217544  542009 cache.go:108] acquiring lock: {Name:mkbdfa3defe6d3385cdc7fd98eb8ed8245d220a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217586  542009 cache.go:108] acquiring lock: {Name:mkdd5a8f62294b913ddd55d3704a0b589ed3eba9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217669  542009 cache.go:108] acquiring lock: {Name:mk3464b8a855c2d3d972a6d466d5d9f3158f321c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217676  542009 cache.go:108] acquiring lock: {Name:mk4f10ee3a8c88fd626836d2165f6d91bcdd8d77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217455  542009 cache.go:108] acquiring lock: {Name:mk2ba872ee84b32342558df767208e9f26a5a614 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217675  542009 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0810 22:54:40.217716  542009 cache.go:108] acquiring lock: {Name:mk424aee259face7c113807a02e8507dd3f19426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.217748  542009 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0810 22:54:40.217772  542009 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0810 22:54:40.217776  542009 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 194.54µs
	I0810 22:54:40.217801  542009 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0810 22:54:40.217809  542009 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0810 22:54:40.217829  542009 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 115.511µs
	I0810 22:54:40.217841  542009 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0810 22:54:40.217842  542009 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0810 22:54:40.217855  542009 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 368.863µs
	I0810 22:54:40.217868  542009 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0810 22:54:40.217883  542009 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0810 22:54:40.217903  542009 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 440.474µs
	I0810 22:54:40.217919  542009 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0810 22:54:40.217844  542009 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0810 22:54:40.217695  542009 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0810 22:54:40.217951  542009 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0810 22:54:40.217965  542009 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 481.425µs
	I0810 22:54:40.217976  542009 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0810 22:54:40.218009  542009 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0810 22:54:40.221296  542009 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0810 22:54:40.364165  542009 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:54:40.364212  542009 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:54:40.364231  542009 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:54:40.364280  542009 start.go:313] acquiring machines lock for no-preload-20210810225439-345780: {Name:mkbca71304c4e8e9735b53d85704fd600ab03c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:54:40.364509  542009 start.go:317] acquired machines lock for "no-preload-20210810225439-345780" in 196.89µs
	I0810 22:54:40.364548  542009 start.go:89] Provisioning new machine with config: &{Name:no-preload-20210810225439-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210810225439-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0810 22:54:40.365369  542009 start.go:126] createHost starting for "" (driver="docker")
	I0810 22:54:40.373519  542009 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0810 22:54:40.373820  542009 start.go:160] libmachine.API.Create for "no-preload-20210810225439-345780" (driver="docker")
	I0810 22:54:40.373865  542009 client.go:168] LocalClient.Create starting
	I0810 22:54:40.373978  542009 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:54:40.374107  542009 main.go:130] libmachine: Decoding PEM data...
	I0810 22:54:40.374134  542009 main.go:130] libmachine: Parsing certificate...
	I0810 22:54:40.374267  542009 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:54:40.374297  542009 main.go:130] libmachine: Decoding PEM data...
	I0810 22:54:40.374317  542009 main.go:130] libmachine: Parsing certificate...
	I0810 22:54:40.374714  542009 cli_runner.go:115] Run: docker network inspect no-preload-20210810225439-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0810 22:54:40.434556  542009 cli_runner.go:162] docker network inspect no-preload-20210810225439-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0810 22:54:40.434647  542009 network_create.go:255] running [docker network inspect no-preload-20210810225439-345780] to gather additional debugging logs...
	I0810 22:54:40.434683  542009 cli_runner.go:115] Run: docker network inspect no-preload-20210810225439-345780
	I0810 22:54:40.480567  542009 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0
	W0810 22:54:40.492178  542009 cli_runner.go:162] docker network inspect no-preload-20210810225439-345780 returned with exit code 1
	I0810 22:54:40.492224  542009 network_create.go:258] error running [docker network inspect no-preload-20210810225439-345780]: docker network inspect no-preload-20210810225439-345780: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20210810225439-345780
	I0810 22:54:40.492244  542009 network_create.go:260] output of [docker network inspect no-preload-20210810225439-345780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20210810225439-345780
	
	** /stderr **
	I0810 22:54:40.492345  542009 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 22:54:40.570222  542009 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-51e1127032be IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fe:70:b4:31}}
	I0810 22:54:40.571029  542009 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-08818fa49fb2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:72:b1:84:3c}}
	I0810 22:54:40.571873  542009 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-34cb86f15cd3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7e:77:13:a1}}
	I0810 22:54:40.572870  542009 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc00058e030] misses:0}
	I0810 22:54:40.572914  542009 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0810 22:54:40.572948  542009 network_create.go:106] attempt to create docker network no-preload-20210810225439-345780 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0810 22:54:40.573066  542009 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210810225439-345780
	I0810 22:54:40.758975  542009 network_create.go:90] docker network no-preload-20210810225439-345780 192.168.76.0/24 created
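The subnet scan above (49 → 58 → 67 taken, 76 chosen) steps the third octet by 9 and takes the first /24 not already claimed by an existing bridge. A minimal sketch of that selection logic, as an illustration only and not minikube's actual implementation:

```shell
# Sketch of the free-subnet scan seen in the log: start at 192.168.49.0/24,
# step the third octet by 9, and keep the first /24 not already in use.
# The "taken" list mirrors the three bridges reported above.
taken="192.168.49.0/24 192.168.58.0/24 192.168.67.0/24"
octet=49
free_subnet=""
while [ "$octet" -le 255 ]; do
  subnet="192.168.$octet.0/24"
  case " $taken " in
    *" $subnet "*) octet=$((octet + 9)) ;;   # taken: try the next candidate
    *) free_subnet="$subnet"; break ;;       # first free subnet wins
  esac
done
echo "$free_subnet"
```

With the three bridges from the log marked as taken, this prints `192.168.76.0/24`, matching the subnet reserved above.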
	I0810 22:54:40.759019  542009 kic.go:106] calculated static IP "192.168.76.2" for the "no-preload-20210810225439-345780" container
	I0810 22:54:40.759101  542009 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0810 22:54:40.820522  542009 cli_runner.go:115] Run: docker volume create no-preload-20210810225439-345780 --label name.minikube.sigs.k8s.io=no-preload-20210810225439-345780 --label created_by.minikube.sigs.k8s.io=true
	I0810 22:54:40.887958  542009 oci.go:102] Successfully created a docker volume no-preload-20210810225439-345780
	I0810 22:54:40.888100  542009 cli_runner.go:115] Run: docker run --rm --name no-preload-20210810225439-345780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210810225439-345780 --entrypoint /usr/bin/test -v no-preload-20210810225439-345780:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0810 22:54:40.941653  542009 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0810 22:54:40.941714  542009 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 724.280862ms
	I0810 22:54:40.941733  542009 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0810 22:54:40.968267  542009 image.go:171] found k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 locally: &{Image:0xc0000a26c0}
	I0810 22:54:40.968319  542009 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0
	I0810 22:54:41.817204  542009 image.go:171] found k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 locally: &{Image:0xc0000a2480}
	I0810 22:54:41.817258  542009 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0
	I0810 22:54:41.865749  542009 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 locally: &{Image:0xc0011a00a0}
	I0810 22:54:41.865799  542009 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0
	I0810 22:54:41.895205  542009 cli_runner.go:168] Completed: docker run --rm --name no-preload-20210810225439-345780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210810225439-345780 --entrypoint /usr/bin/test -v no-preload-20210810225439-345780:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (1.007010401s)
	I0810 22:54:41.895250  542009 oci.go:106] Successfully prepared a docker volume no-preload-20210810225439-345780
	W0810 22:54:41.895300  542009 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0810 22:54:41.895315  542009 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0810 22:54:41.895313  542009 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0810 22:54:41.895395  542009 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0810 22:54:42.022615  542009 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210810225439-345780 --name no-preload-20210810225439-345780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210810225439-345780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210810225439-345780 --network no-preload-20210810225439-345780 --ip 192.168.76.2 --volume no-preload-20210810225439-345780:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0810 22:54:42.699309  542009 cli_runner.go:115] Run: docker container inspect no-preload-20210810225439-345780 --format={{.State.Running}}
	I0810 22:54:42.805214  542009 cli_runner.go:115] Run: docker container inspect no-preload-20210810225439-345780 --format={{.State.Status}}
	I0810 22:54:42.895603  542009 cli_runner.go:115] Run: docker exec no-preload-20210810225439-345780 stat /var/lib/dpkg/alternatives/iptables
	I0810 22:54:42.974163  542009 image.go:171] found k8s.gcr.io/etcd:3.4.13-3 locally: &{Image:0xc00121a1e0}
	I0810 22:54:42.974213  542009 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3
	I0810 22:54:43.074976  542009 oci.go:278] the created container "no-preload-20210810225439-345780" has a running status.
	I0810 22:54:43.075017  542009 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/no-preload-20210810225439-345780/id_rsa...
	I0810 22:54:43.321784  542009 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/no-preload-20210810225439-345780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0810 22:54:43.844472  542009 cli_runner.go:115] Run: docker container inspect no-preload-20210810225439-345780 --format={{.State.Status}}
	I0810 22:54:43.907240  542009 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0810 22:54:43.907267  542009 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210810225439-345780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0810 22:54:44.089656  542009 cli_runner.go:115] Run: docker container inspect no-preload-20210810225439-345780 --format={{.State.Status}}
	I0810 22:54:44.131398  542009 machine.go:88] provisioning docker machine ...
	I0810 22:54:44.131491  542009 ubuntu.go:169] provisioning hostname "no-preload-20210810225439-345780"
	I0810 22:54:44.131559  542009 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210810225439-345780
	I0810 22:54:44.176512  542009 main.go:130] libmachine: Using SSH client type: native
	I0810 22:54:44.176736  542009 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0810 22:54:44.176763  542009 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210810225439-345780 && echo "no-preload-20210810225439-345780" | sudo tee /etc/hostname
	I0810 22:54:44.203196  542009 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0810 22:54:44.203261  542009 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 3.985590867s
	I0810 22:54:44.203288  542009 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0810 22:54:44.307476  542009 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210810225439-345780
	
	I0810 22:54:44.307578  542009 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210810225439-345780
	I0810 22:54:44.351607  542009 main.go:130] libmachine: Using SSH client type: native
	I0810 22:54:44.351830  542009 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0810 22:54:44.351862  542009 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210810225439-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210810225439-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210810225439-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
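The SSH command above idempotently pins the hostname to `127.0.1.1`: it rewrites an existing `127.0.1.1` entry if one is present, otherwise appends one. The same logic can be exercised against a scratch file (no sudo, hypothetical `oldname` placeholder) as a hedged illustration:

```shell
# Re-run of the /etc/hosts update logic against a temp file:
# replace an existing 127.0.1.1 entry, or append one if missing.
HOSTS=$(mktemp)
NAME=no-preload-20210810225439-345780
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # an entry exists: rewrite it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # no entry yet: append one
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Running it a second time changes nothing, which is why the provisioner can safely re-issue the command on every start.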
	I0810 22:54:44.469142  542009 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:54:44.469184  542009 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:54:44.469217  542009 ubuntu.go:177] setting up certificates
	I0810 22:54:44.469229  542009 provision.go:83] configureAuth start
	I0810 22:54:44.469295  542009 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210810225439-345780
	I0810 22:54:44.520783  542009 provision.go:137] copyHostCerts
	I0810 22:54:44.520869  542009 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:54:44.520884  542009 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:54:44.520970  542009 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:54:44.521075  542009 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:54:44.521092  542009 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:54:44.521125  542009 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:54:44.521196  542009 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:54:44.521211  542009 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:54:44.521236  542009 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:54:44.521294  542009 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210810225439-345780 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210810225439-345780]
	I0810 22:54:44.759346  542009 provision.go:171] copyRemoteCerts
	I0810 22:54:44.759426  542009 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:54:44.759485  542009 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210810225439-345780
	I0810 22:54:44.805343  542009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/no-preload-20210810225439-345780/id_rsa Username:docker}
	I0810 22:54:45.438400  536187 out.go:204]   - Configuring RBAC rules ...
	I0810 22:54:45.857313  536187 cni.go:93] Creating CNI manager for ""
	I0810 22:54:45.857345  536187 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:52:35 UTC, end at Tue 2021-08-10 22:54:47 UTC. --
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.857032895Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.859112816Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.862845038Z" level=info msg="Conmon does support the --sync option"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.862941592Z" level=info msg="No seccomp profile specified, using the internal default"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.862953810Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.870318243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.873285153Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.876527569Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.890975120Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 10 22:54:19 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:19.891014787Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 10 22:54:20 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:20.329644985Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-9tljg Namespace:kube-system ID:97e17aed56acdaa0e3ff90b3dea55cffd35d24451d900582ae64379d0ea18181 NetNS:/var/run/netns/de69ec9c-b650-4219-9b68-b3b9062cf15a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 10 22:54:20 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:20.329948897Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 10 22:54:20 pause-20210810225233-345780 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.311949034Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=c67644af-aeef-4c28-a3c4-9f34ca652936 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.458179604Z" level=info msg="Ran pod sandbox 37ab8d7270bbc15eaf7f9d636f2f134400fa8039885f6b8c54586f2f2e7af62f with infra container: kube-system/storage-provisioner/POD" id=c67644af-aeef-4c28-a3c4-9f34ca652936 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.459955429Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=13158592-0516-4aa6-9c98-d99698544242 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.461417486Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=13158592-0516-4aa6-9c98-d99698544242 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.466360851Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e7cedc74-2fd3-4ad5-8979-4030d1c93062 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.467217861Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e7cedc74-2fd3-4ad5-8979-4030d1c93062 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.468056817Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=67b2845e-cd71-4994-b793-4660722f8fed name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.480865848Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/96a65636ee386c6cc0fd629c9f54aac7dd527b612c86286bc34b46437b7530fc/merged/etc/passwd: no such file or directory"
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.481144221Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/96a65636ee386c6cc0fd629c9f54aac7dd527b612c86286bc34b46437b7530fc/merged/etc/group: no such file or directory"
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.641551609Z" level=info msg="Created container 26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d: kube-system/storage-provisioner/storage-provisioner" id=67b2845e-cd71-4994-b793-4660722f8fed name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.642142924Z" level=info msg="Starting container: 26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d" id=009bc771-8c7b-4782-8a11-c0ec244eec7b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 10 22:54:23 pause-20210810225233-345780 crio[2984]: time="2021-08-10 22:54:23.652081064Z" level=info msg="Started container 26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d: kube-system/storage-provisioner/storage-provisioner" id=009bc771-8c7b-4782-8a11-c0ec244eec7b name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	26755da5fc303       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   24 seconds ago       Running             storage-provisioner       0                   37ab8d7270bbc
	27f23a1705c7a       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   32 seconds ago       Running             coredns                   0                   97e17aed56acd
	3351b40c4bee3       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   About a minute ago   Running             kindnet-cni               0                   898bd0d9b428e
	15938cb1c549c       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   About a minute ago   Running             kube-proxy                0                   b7bae7384d157
	46e328447ea1f       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   About a minute ago   Running             etcd                      0                   1be5f8b230497
	a540c3a2f6071       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   About a minute ago   Running             kube-scheduler            0                   e30f9d0a5b499
	22e7cf1507477       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   About a minute ago   Running             kube-apiserver            0                   a659338881290
	57127954bdfc1       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   About a minute ago   Running             kube-controller-manager   0                   94f10dd306eb5
	
	* 
	* ==> coredns [27f23a1705c7a7fbd33de81890a87faa1ca3597a7360b842018be60364c42dc0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.005471] IPv4: martian source 10.88.0.3 from 10.88.0.3, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 22 8a 79 ab 40 08 06        .......".y.@..
	[  +0.000005] IPv4: martian source 10.88.0.3 from 10.88.0.3, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 8e 22 8a 79 ab 40 08 06        .......".y.@..
	[Aug10 22:52] IPv4: martian source 10.88.0.5 from 10.88.0.5, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 16 94 ad a7 98 49 08 06        ...........I..
	[  +0.000193] IPv4: martian source 10.88.0.4 from 10.88.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 51 85 f7 b2 26 08 06        .......Q...&..
	[ +18.482598] cgroup: cgroup2: unknown option "nsdelegate"
	[ +18.125191] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:53] cgroup: cgroup2: unknown option "nsdelegate"
	[  +5.448442] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 16 d4 57 d2 55 58 08 06        ........W.UX..
	[  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 16 d4 57 d2 55 58 08 06        ........W.UX..
	[  +0.217157] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e 34 6d 63 a8 9e 08 06        .......4mc....
	[  +8.255571] cgroup: cgroup2: unknown option "nsdelegate"
	[ +19.470015] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e 89 da dc dc 8a 08 06        ......>.......
	[  +0.519360] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug10 22:54] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb1cc53e2
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e c6 5b eb c6 d4 08 06        ........[.....
	[  +5.184517] cgroup: cgroup2: unknown option "nsdelegate"
	[ +22.322021] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [46e328447ea1fe24d95ec4ed097463a52c1cea665882c211412dd92b479dd7fc] <==
	* 2021-08-10 22:53:04.823785 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (1.775139388s) to execute
	2021-08-10 22:53:04.823936 W | etcdserver: read-only range request "key:\"/registry/flowschemas/probes\" " with result "range_response_count:1 size:945" took too long (2.227468173s) to execute
	2021-08-10 22:53:18.474090 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:53:27.597008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:53:32.083305 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (897.67387ms) to execute
	2021-08-10 22:53:32.083407 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-9tljg\" " with result "range_response_count:1 size:4461" took too long (1.104277292s) to execute
	2021-08-10 22:53:34.242609 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.587690587s) to execute
	2021-08-10 22:53:34.242649 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.053365072s) to execute
	2021-08-10 22:53:34.242728 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-9tljg\" " with result "range_response_count:1 size:4461" took too long (1.26356793s) to execute
	2021-08-10 22:53:34.242841 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-kzdm8\" " with result "range_response_count:1 size:4473" took too long (1.590538009s) to execute
	2021-08-10 22:53:37.596282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:53:47.596768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:53:57.597080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:54:07.596533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:54:11.182408 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-9tljg\" " with result "range_response_count:1 size:4461" took too long (203.318149ms) to execute
	2021-08-10 22:54:14.689413 W | wal: sync duration of 1.181409265s, expected less than 1s
	2021-08-10 22:54:15.366235 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.181847543s) to execute
	2021-08-10 22:54:15.366320 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-9tljg\" " with result "range_response_count:1 size:4461" took too long (1.387470153s) to execute
	2021-08-10 22:54:15.366365 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.037761789s) to execute
	2021-08-10 22:54:15.366541 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (908.765084ms) to execute
	2021-08-10 22:54:17.596701 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:54:31.850592 W | wal: sync duration of 1.041792917s, expected less than 1s
	2021-08-10 22:54:31.929726 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (146.505751ms) to execute
	2021-08-10 22:54:31.929941 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:6149" took too long (255.08033ms) to execute
	2021-08-10 22:54:35.205475 W | wal: sync duration of 3.187765059s, expected less than 1s
	
	* 
	* ==> kernel <==
	*  22:55:00 up  2:37,  0 users,  load average: 4.93, 3.56, 2.58
	Linux pause-20210810225233-345780 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [22e7cf150747782d1758ba33cf5b652f6966b3b29e2f9bd35f7429689a2ece32] <==
	* I0810 22:53:06.863054       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0810 22:53:12.237090       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0810 22:53:19.694668       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0810 22:53:20.393804       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0810 22:53:24.637835       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:53:24.637878       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:53:24.637886       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:53:32.084592       1 trace.go:205] Trace[1971463959]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-9tljg,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (10-Aug-2021 22:53:30.978) (total time: 1105ms):
	Trace[1971463959]: ---"About to write a response" 1105ms (22:53:00.084)
	Trace[1971463959]: [1.105995789s] [1.105995789s] END
	I0810 22:53:34.243822       1 trace.go:205] Trace[366570254]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-kzdm8,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (10-Aug-2021 22:53:32.651) (total time: 1592ms):
	Trace[366570254]: ---"About to write a response" 1591ms (22:53:00.243)
	Trace[366570254]: [1.592224682s] [1.592224682s] END
	I0810 22:53:34.244313       1 trace.go:205] Trace[120626499]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-9tljg,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (10-Aug-2021 22:53:32.978) (total time: 1265ms):
	Trace[120626499]: ---"About to write a response" 1265ms (22:53:00.243)
	Trace[120626499]: [1.265879308s] [1.265879308s] END
	I0810 22:53:57.903048       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:53:57.903107       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:53:57.903118       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:54:15.367508       1 trace.go:205] Trace[1949810703]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-9tljg,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (10-Aug-2021 22:54:13.978) (total time: 1389ms):
	Trace[1949810703]: ---"About to write a response" 1388ms (22:54:00.366)
	Trace[1949810703]: [1.389123285s] [1.389123285s] END
	I0810 22:54:31.650486       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:54:31.650530       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:54:31.650539       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [57127954bdfc1b436f0de0ffe71c79d052e1d35a862b48eb1e81cac769685714] <==
	* I0810 22:53:19.705538       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w546v"
	E0810 22:53:19.724304       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c7b81213-a61c-463f-880d-31df965c74df", ResourceVersion:"391", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764232786, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002208ed0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002208ee8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002208f00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002208f18)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0020ccda0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00191ff00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002208f30), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002208f48), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0020ccde0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001019920), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002264428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b7ae00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0022562a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002264478)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0810 22:53:19.726190       1 shared_informer.go:247] Caches are synced for deployment 
	I0810 22:53:19.757128       1 shared_informer.go:247] Caches are synced for expand 
	I0810 22:53:19.757246       1 shared_informer.go:247] Caches are synced for disruption 
	I0810 22:53:19.757255       1 disruption.go:371] Sending events to api server.
	I0810 22:53:19.757295       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0810 22:53:19.795016       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0810 22:53:19.841418       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0810 22:53:19.851502       1 shared_informer.go:247] Caches are synced for resource quota 
	I0810 22:53:19.863997       1 shared_informer.go:247] Caches are synced for endpoint 
	I0810 22:53:19.891334       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0810 22:53:19.940369       1 shared_informer.go:247] Caches are synced for resource quota 
	I0810 22:53:20.355265       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0810 22:53:20.385042       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0810 22:53:20.385066       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0810 22:53:20.395872       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0810 22:53:20.408179       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0810 22:53:20.651012       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-kzdm8"
	I0810 22:53:20.657132       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-9tljg"
	I0810 22:53:20.679594       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-kzdm8"
	I0810 22:53:24.598620       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0810 22:53:24.598942       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-kzdm8" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-558bd4d5db-kzdm8"
	I0810 22:53:24.598970       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-9tljg" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-558bd4d5db-9tljg"
	
	* 
	* ==> kube-proxy [15938cb1c549c958fdfb0ddb147d424c060bdad051e195cc09076553c2b02356] <==
	* I0810 22:53:21.387127       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0810 22:53:21.387215       1 server_others.go:140] Detected node IP 192.168.49.2
	W0810 22:53:21.387248       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0810 22:53:21.409577       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0810 22:53:21.409607       1 server_others.go:212] Using iptables Proxier.
	I0810 22:53:21.409618       1 server_others.go:219] creating dualStackProxier for iptables.
	W0810 22:53:21.409628       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0810 22:53:21.409956       1 server.go:643] Version: v1.21.3
	I0810 22:53:21.411443       1 config.go:224] Starting endpoint slice config controller
	I0810 22:53:21.411647       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0810 22:53:21.411558       1 config.go:315] Starting service config controller
	I0810 22:53:21.411796       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0810 22:53:21.415382       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0810 22:53:21.416507       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0810 22:53:21.512697       1 shared_informer.go:247] Caches are synced for service config 
	I0810 22:53:21.512715       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a540c3a2f6071673a8e1e384ab3908831f0950442ec4ad560309ca061cc61316] <==
	* E0810 22:52:59.130553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:52:59.139759       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:52:59.164281       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:52:59.226794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:52:59.250049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:52:59.294570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:52:59.336846       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:53:00.524690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:53:00.827329       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:53:01.159808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:53:01.259500       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:53:01.526752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:53:01.567389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:53:01.602565       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:53:01.687584       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:53:01.728893       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:53:01.856858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:53:01.873391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:53:01.878650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:53:02.306388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:53:02.415055       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:53:04.509812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:53:04.738008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0810 22:53:07.160403       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0810 22:54:51.609898       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:52:35 UTC, end at Tue 2021-08-10 22:55:00 UTC. --
	Aug 10 22:54:26 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:26.434009    4045 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115386    4045 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:false CgroupRoot: CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115438    4045 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115455    4045 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115464    4045 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115637    4045 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115672    4045 remote_runtime.go:62] parsed scheme: ""
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115678    4045 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115749    4045 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115760    4045 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115820    4045 util_unix.go:103] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/crio/crio.sock" fullURLFormat="unix:///var/run/crio/crio.sock"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115831    4045 remote_image.go:50] parsed scheme: ""
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115835    4045 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115843    4045 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115847    4045 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115915    4045 kubelet.go:404] "Attempting to sync node with API server"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115931    4045 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115955    4045 kubelet.go:283] "Adding apiserver pod source"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.115969    4045 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.124591    4045 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="cri-o" version="1.20.3" apiVersion="v1alpha1"
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: E0810 22:54:31.422858    4045 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 10 22:54:31 pause-20210810225233-345780 kubelet[4045]: I0810 22:54:31.423463    4045 server.go:1190] "Started kubelet"
	Aug 10 22:54:31 pause-20210810225233-345780 systemd[1]: kubelet.service: Succeeded.
	Aug 10 22:54:31 pause-20210810225233-345780 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [26755da5fc303695151f4bc4c6a1a7cfc72a35d22b68b11aa6bd432fd519247d] <==
	* I0810 22:54:23.661988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:54:23.671948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:54:23.672008       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:54:23.685630       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:54:23.685819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210810225233-345780_45d49143-1b99-4ab6-a806-f691e591097c!
	I0810 22:54:23.685827       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6761ff01-40ae-48fe-9365-e1e10384b78a", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210810225233-345780_45d49143-1b99-4ab6-a806-f691e591097c became leader
	I0810 22:54:23.786938       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210810225233-345780_45d49143-1b99-4ab6-a806-f691e591097c!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0810 22:54:58.217653  543569 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/PauseAgain (34.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (1716.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210810225510-345780 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-20210810225510-345780 --alsologtostderr -v=3: signal: killed (28m32.691540475s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-20210810225510-345780"  ...
	* Powering off "embed-certs-20210810225510-345780" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0810 22:56:37.450726  555123 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:56:37.451166  555123 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:56:37.451181  555123 out.go:311] Setting ErrFile to fd 2...
	I0810 22:56:37.451188  555123 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:56:37.451449  555123 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:56:37.451763  555123 out.go:305] Setting JSON to false
	I0810 22:56:37.451887  555123 mustload.go:65] Loading cluster: embed-certs-20210810225510-345780
	I0810 22:56:37.452810  555123 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210810225510-345780/config.json ...
	I0810 22:56:37.453143  555123 mustload.go:65] Loading cluster: embed-certs-20210810225510-345780
	I0810 22:56:37.453282  555123 stop.go:39] StopHost: embed-certs-20210810225510-345780
	I0810 22:56:37.455854  555123 out.go:177] * Stopping node "embed-certs-20210810225510-345780"  ...
	I0810 22:56:37.455945  555123 cli_runner.go:115] Run: docker container inspect embed-certs-20210810225510-345780 --format={{.State.Status}}
	I0810 22:56:37.513800  555123 out.go:177] * Powering off "embed-certs-20210810225510-345780" via SSH ...
	I0810 22:56:37.513878  555123 cli_runner.go:115] Run: docker exec --privileged -t embed-certs-20210810225510-345780 /bin/bash -c "sudo init 0"
	I0810 22:56:38.686850  555123 cli_runner.go:115] Run: docker container inspect embed-certs-20210810225510-345780 --format={{.State.Status}}
	I0810 22:56:38.729581  555123 oci.go:646] temporary error: container embed-certs-20210810225510-345780 status is Running but expect it to be exited
	I0810 22:56:38.729650  555123 oci.go:652] Successfully shutdown container embed-certs-20210810225510-345780
	I0810 22:56:38.729659  555123 stop.go:88] shutdown container: err=<nil>
	I0810 22:56:38.729722  555123 main.go:130] libmachine: Stopping "embed-certs-20210810225510-345780"...
	I0810 22:56:38.729800  555123 cli_runner.go:115] Run: docker container inspect embed-certs-20210810225510-345780 --format={{.State.Status}}
	I0810 22:56:38.772340  555123 kic_runner.go:94] Run: systemctl --version
	I0810 22:56:38.772365  555123 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210810225510-345780 systemctl --version]
	I0810 22:56:38.904226  555123 kic_runner.go:94] Run: sudo systemctl stop kubelet
	I0810 22:56:38.904253  555123 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210810225510-345780 sudo systemctl stop kubelet]
	I0810 22:56:39.036422  555123 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0810 22:56:39.036557  555123 kic_runner.go:94] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0810 22:56:39.036570  555123 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210810225510-345780 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0810 22:56:47.222859  555123 kic.go:456] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	time="2021-08-10T22:56:41Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T22:56:43Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T22:56:45Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T22:56:47Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:56:47.222886  555123 kic.go:466] successfully stopped kubernetes!
	I0810 22:56:47.222950  555123 kic_runner.go:94] Run: pgrep kube-apiserver
	I0810 22:56:47.222961  555123 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210810225510-345780 pgrep kube-apiserver]

                                                
                                                
** /stderr **
start_stop_delete_test.go:203: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p embed-certs-20210810225510-345780 --alsologtostderr -v=3" : signal: killed
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210810225510-345780
helpers_test.go:236: (dbg) docker inspect embed-certs-20210810225510-345780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "be590f181c6635dbec77c1e5a90270ae4d9f943be5339535ed53a35d2eefdb08",
	        "Created": "2021-08-10T22:55:11.841656856Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 547542,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:55:12.339689311Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/be590f181c6635dbec77c1e5a90270ae4d9f943be5339535ed53a35d2eefdb08/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/be590f181c6635dbec77c1e5a90270ae4d9f943be5339535ed53a35d2eefdb08/hostname",
	        "HostsPath": "/var/lib/docker/containers/be590f181c6635dbec77c1e5a90270ae4d9f943be5339535ed53a35d2eefdb08/hosts",
	        "LogPath": "/var/lib/docker/containers/be590f181c6635dbec77c1e5a90270ae4d9f943be5339535ed53a35d2eefdb08/be590f181c6635dbec77c1e5a90270ae4d9f943be5339535ed53a35d2eefdb08-json.log",
	        "Name": "/embed-certs-20210810225510-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210810225510-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210810225510-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c2c4a73d91b9cbe732c3e6795ca45b13dedb5cd1d4319ab7d23b9755df243517-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c874215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/docker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f80f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2c4a73d91b9cbe732c3e6795ca45b13dedb5cd1d4319ab7d23b9755df243517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2c4a73d91b9cbe732c3e6795ca45b13dedb5cd1d4319ab7d23b9755df243517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2c4a73d91b9cbe732c3e6795ca45b13dedb5cd1d4319ab7d23b9755df243517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210810225510-345780",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210810225510-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210810225510-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210810225510-345780",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210810225510-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7902a8fcb72ac156989ea2f86e8f55b2d55757b24864c9ae263c609ea823ebe9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7902a8fcb72a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210810225510-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "be590f181c66"
	                    ],
	                    "NetworkID": "b87adee7cb5ec9f9d53decdab8bb054af8d61f3427c0d63f3ed16710af71ca49",
	                    "EndpointID": "bd1e60e003d32e749edf79f21ff095649490821b6a280217f0634fa476926743",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210810225510-345780 -n embed-certs-20210810225510-345780
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210810225510-345780 -n embed-certs-20210810225510-345780: exit status 3 (3.331210984s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0810 23:25:13.461319  573652 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:48818->127.0.0.1:33169: read: connection reset by peer
	E0810 23:25:13.461341  573652 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:48818->127.0.0.1:33169: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "embed-certs-20210810225510-345780" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (1716.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1662.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210810225439-345780 --alsologtostderr -v=3
E0810 22:57:13.664829  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:57:59.310257  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 23:02:13.665149  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 23:02:59.310080  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 23:06:02.355987  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 23:07:13.665488  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-20210810225439-345780 --alsologtostderr -v=3: signal: killed (27m38.953260221s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-20210810225439-345780"  ...
	* Powering off "no-preload-20210810225439-345780" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0810 22:57:00.971579  558014 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:57:00.971693  558014 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:57:00.971703  558014 out.go:311] Setting ErrFile to fd 2...
	I0810 22:57:00.971710  558014 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:57:00.971871  558014 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:57:00.972095  558014 out.go:305] Setting JSON to false
	I0810 22:57:00.972199  558014 mustload.go:65] Loading cluster: no-preload-20210810225439-345780
	I0810 22:57:00.972697  558014 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210810225439-345780/config.json ...
	I0810 22:57:00.972885  558014 mustload.go:65] Loading cluster: no-preload-20210810225439-345780
	I0810 22:57:00.973040  558014 stop.go:39] StopHost: no-preload-20210810225439-345780
	I0810 22:57:00.976179  558014 out.go:177] * Stopping node "no-preload-20210810225439-345780"  ...
	I0810 22:57:00.976278  558014 cli_runner.go:115] Run: docker container inspect no-preload-20210810225439-345780 --format={{.State.Status}}
	I0810 22:57:01.032559  558014 out.go:177] * Powering off "no-preload-20210810225439-345780" via SSH ...
	I0810 22:57:01.032662  558014 cli_runner.go:115] Run: docker exec --privileged -t no-preload-20210810225439-345780 /bin/bash -c "sudo init 0"
	I0810 22:57:02.204488  558014 cli_runner.go:115] Run: docker container inspect no-preload-20210810225439-345780 --format={{.State.Status}}
	I0810 22:57:02.250023  558014 oci.go:646] temporary error: container no-preload-20210810225439-345780 status is Running but expect it to be exited
	I0810 22:57:02.250067  558014 oci.go:652] Successfully shutdown container no-preload-20210810225439-345780
	I0810 22:57:02.250078  558014 stop.go:88] shutdown container: err=<nil>
	I0810 22:57:02.250133  558014 main.go:130] libmachine: Stopping "no-preload-20210810225439-345780"...
	I0810 22:57:02.250211  558014 cli_runner.go:115] Run: docker container inspect no-preload-20210810225439-345780 --format={{.State.Status}}
	I0810 22:57:02.292435  558014 kic_runner.go:94] Run: systemctl --version
	I0810 22:57:02.292455  558014 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210810225439-345780 systemctl --version]
	I0810 22:57:02.427612  558014 kic_runner.go:94] Run: sudo systemctl stop kubelet
	I0810 22:57:02.427636  558014 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210810225439-345780 sudo systemctl stop kubelet]
	I0810 22:57:02.552521  558014 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0810 22:57:02.552652  558014 kic_runner.go:94] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0810 22:57:02.552667  558014 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210810225439-345780 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0810 22:57:10.745836  558014 kic.go:456] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	time="2021-08-10T22:57:04Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T22:57:06Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T22:57:08Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T22:57:10Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:57:10.745866  558014 kic.go:466] successfully stopped kubernetes!
	I0810 22:57:10.745927  558014 kic_runner.go:94] Run: pgrep kube-apiserver
	I0810 22:57:10.745940  558014 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210810225439-345780 pgrep kube-apiserver]

                                                
                                                
** /stderr **
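Triage note: the `level=fatal` lines in the stderr dump above show crictl retrying `unix:///var/run/crio/crio.sock` every ~2 seconds until minikube gave up and proceeded anyway. When sifting a saved stderr dump for this pattern, a quick tally with grep is enough; the scratch file path and the inlined excerpt below are illustrative, not part of the test harness.

```shell
# Save the crictl stderr excerpt to a scratch file (sample copied from the dump above).
cat > /tmp/crictl_stderr.txt <<'EOF'
time="2021-08-10T22:57:04Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
time="2021-08-10T22:57:06Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
time="2021-08-10T22:57:08Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
time="2021-08-10T22:57:10Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
EOF

# Count the fatal connection attempts (one per ~2s retry).
grep -c 'level=fatal' /tmp/crictl_stderr.txt   # prints 4
```

Four consecutive failures against crio.sock inside the container is consistent with crio already being down (or never reachable) when `stop` ran, which is why the subsequent `pgrep kube-apiserver` is the last thing logged before the test runner killed the process.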
start_stop_delete_test.go:203: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-20210810225439-345780 --alsologtostderr -v=3" : signal: killed
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210810225439-345780
helpers_test.go:236: (dbg) docker inspect no-preload-20210810225439-345780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "97660abbfeca04f72f75fd6b00dd4217558ca67c5e658534fad0c29b903d342d",
	        "Created": "2021-08-10T22:54:42.081351523Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 542547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T22:54:42.689998308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/97660abbfeca04f72f75fd6b00dd4217558ca67c5e658534fad0c29b903d342d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/97660abbfeca04f72f75fd6b00dd4217558ca67c5e658534fad0c29b903d342d/hostname",
	        "HostsPath": "/var/lib/docker/containers/97660abbfeca04f72f75fd6b00dd4217558ca67c5e658534fad0c29b903d342d/hosts",
	        "LogPath": "/var/lib/docker/containers/97660abbfeca04f72f75fd6b00dd4217558ca67c5e658534fad0c29b903d342d/97660abbfeca04f72f75fd6b00dd4217558ca67c5e658534fad0c29b903d342d-json.log",
	        "Name": "/no-preload-20210810225439-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210810225439-345780:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210810225439-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4273051553d32f5d0b1a0f3bdd6dc5d773325eb2da1b9786019571c942787800-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c874215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/docker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f80f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4273051553d32f5d0b1a0f3bdd6dc5d773325eb2da1b9786019571c942787800/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4273051553d32f5d0b1a0f3bdd6dc5d773325eb2da1b9786019571c942787800/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4273051553d32f5d0b1a0f3bdd6dc5d773325eb2da1b9786019571c942787800/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210810225439-345780",
	                "Source": "/var/lib/docker/volumes/no-preload-20210810225439-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210810225439-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210810225439-345780",
	                "name.minikube.sigs.k8s.io": "no-preload-20210810225439-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "043df574adde8d867e7f6c91066384a030626ec7a3aea3025ced6dbf5bd169d7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/043df574adde",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210810225439-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "97660abbfeca"
	                    ],
	                    "NetworkID": "aac5483000b1c37ec4c36d031d1eb4fb92e0b5e4e1af8e79e807df765aacf373",
	                    "EndpointID": "51874c7d2f02881bdc0b4cd7fc63b67e9d5684c61d8dc432c3449a01dbd65862",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210810225439-345780 -n no-preload-20210810225439-345780
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210810225439-345780 -n no-preload-20210810225439-345780: exit status 3 (3.345620395s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0810 23:24:43.254646  573426 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44162->127.0.0.1:33164: read: connection reset by peer
	E0810 23:24:43.254672  573426 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44162->127.0.0.1:33164: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "no-preload-20210810225439-345780" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (1662.36s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (1724.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210810230738-345780 --alsologtostderr -v=3
E0810 23:10:16.711288  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 23:11:13.893180  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:13.898525  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:13.908768  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:13.929104  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:13.969425  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:14.049790  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:14.210250  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:14.530873  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:15.171829  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:16.452329  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:19.013208  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:24.134248  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:34.374539  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:11:54.855118  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:12:13.664910  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 23:12:35.815604  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:12:59.310562  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 23:13:57.737433  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:16:13.893013  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:16:41.578788  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:17:13.664863  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 23:17:59.310176  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 23:21:13.893342  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
E0810 23:22:13.665490  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 23:22:42.356822  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 23:22:59.309755  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210810230738-345780 --alsologtostderr -v=3: signal: killed (28m40.826483094s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-different-port-20210810230738-345780"  ...
	* Powering off "default-k8s-different-port-20210810230738-345780" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0810 23:08:57.923543  572996 out.go:298] Setting OutFile to fd 1 ...
	I0810 23:08:57.923656  572996 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 23:08:57.923662  572996 out.go:311] Setting ErrFile to fd 2...
	I0810 23:08:57.923666  572996 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 23:08:57.923828  572996 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 23:08:57.924065  572996 out.go:305] Setting JSON to false
	I0810 23:08:57.924185  572996 mustload.go:65] Loading cluster: default-k8s-different-port-20210810230738-345780
	I0810 23:08:57.924687  572996 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210810230738-345780/config.json ...
	I0810 23:08:57.924896  572996 mustload.go:65] Loading cluster: default-k8s-different-port-20210810230738-345780
	I0810 23:08:57.925105  572996 stop.go:39] StopHost: default-k8s-different-port-20210810230738-345780
	I0810 23:08:57.933017  572996 out.go:177] * Stopping node "default-k8s-different-port-20210810230738-345780"  ...
	I0810 23:08:57.933147  572996 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210810230738-345780 --format={{.State.Status}}
	I0810 23:08:57.988519  572996 out.go:177] * Powering off "default-k8s-different-port-20210810230738-345780" via SSH ...
	I0810 23:08:57.988626  572996 cli_runner.go:115] Run: docker exec --privileged -t default-k8s-different-port-20210810230738-345780 /bin/bash -c "sudo init 0"
	I0810 23:08:59.132817  572996 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210810230738-345780 --format={{.State.Status}}
	I0810 23:08:59.179097  572996 oci.go:646] temporary error: container default-k8s-different-port-20210810230738-345780 status is Running but expect it to be exited
	I0810 23:08:59.179144  572996 oci.go:652] Successfully shutdown container default-k8s-different-port-20210810230738-345780
	I0810 23:08:59.179152  572996 stop.go:88] shutdown container: err=<nil>
	I0810 23:08:59.179200  572996 main.go:130] libmachine: Stopping "default-k8s-different-port-20210810230738-345780"...
	I0810 23:08:59.179275  572996 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210810230738-345780 --format={{.State.Status}}
	I0810 23:08:59.222702  572996 kic_runner.go:94] Run: systemctl --version
	I0810 23:08:59.222731  572996 kic_runner.go:115] Args: [docker exec --privileged default-k8s-different-port-20210810230738-345780 systemctl --version]
	I0810 23:08:59.360523  572996 kic_runner.go:94] Run: sudo systemctl stop kubelet
	I0810 23:08:59.360548  572996 kic_runner.go:115] Args: [docker exec --privileged default-k8s-different-port-20210810230738-345780 sudo systemctl stop kubelet]
	I0810 23:08:59.500419  572996 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0810 23:08:59.500543  572996 kic_runner.go:94] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0810 23:08:59.500559  572996 kic_runner.go:115] Args: [docker exec --privileged default-k8s-different-port-20210810230738-345780 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0810 23:09:07.684895  572996 kic.go:456] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	time="2021-08-10T23:09:01Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T23:09:03Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T23:09:05Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-10T23:09:07Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 23:09:07.684961  572996 kic.go:466] successfully stopped kubernetes!
	I0810 23:09:07.685035  572996 kic_runner.go:94] Run: pgrep kube-apiserver
	I0810 23:09:07.685049  572996 kic_runner.go:115] Args: [docker exec --privileged default-k8s-different-port-20210810230738-345780 pgrep kube-apiserver]

                                                
                                                
** /stderr **
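The `crictl` failures above all report the same condition: the CRI endpoint `unix:///var/run/crio/crio.sock` stopped accepting connections, so each invocation retried until its context deadline expired. A minimal, hypothetical sketch of that probe-with-deadline pattern (connect, back off, retry until timeout) in Python:

```python
import socket
import time

def wait_for_unix_socket(path, timeout=2.0, interval=0.5):
    """Poll a unix-domain socket until it accepts a connection or the
    deadline passes; returns True on success, False on timeout.

    Loosely mirrors crictl's connect-with-deadline behaviour against
    the CRI endpoint; the function name and parameters are illustrative,
    not part of any real crictl or minikube API."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False
```

With the socket dead (as in the log), every retry raises `OSError` and the call returns False once the deadline passes, which is the `context deadline exceeded` outcome seen above.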
start_stop_delete_test.go:203: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-different-port-20210810230738-345780 --alsologtostderr -v=3" : signal: killed
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210810230738-345780
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210810230738-345780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd04a71ec58af1c498051478cca4815da779717a2c092569753582715dff23bb",
	        "Created": "2021-08-10T23:07:40.297303764Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 569237,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-10T23:07:40.765758664Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/fd04a71ec58af1c498051478cca4815da779717a2c092569753582715dff23bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd04a71ec58af1c498051478cca4815da779717a2c092569753582715dff23bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd04a71ec58af1c498051478cca4815da779717a2c092569753582715dff23bb/hosts",
	        "LogPath": "/var/lib/docker/containers/fd04a71ec58af1c498051478cca4815da779717a2c092569753582715dff23bb/fd04a71ec58af1c498051478cca4815da779717a2c092569753582715dff23bb-json.log",
	        "Name": "/default-k8s-different-port-20210810230738-345780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-different-port-20210810230738-345780:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210810230738-345780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/618715f4bebd879b2e6ba0736b79ce53bcb3769e2e9bcea0c0b7b40c309d9394-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/618715f4bebd879b2e6ba0736b79ce53bcb3769e2e9bcea0c0b7b40c309d9394/merged",
	                "UpperDir": "/var/lib/docker/overlay2/618715f4bebd879b2e6ba0736b79ce53bcb3769e2e9bcea0c0b7b40c309d9394/diff",
	                "WorkDir": "/var/lib/docker/overlay2/618715f4bebd879b2e6ba0736b79ce53bcb3769e2e9bcea0c0b7b40c309d9394/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210810230738-345780",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210810230738-345780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210810230738-345780",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210810230738-345780",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210810230738-345780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c80ca055157c6ff7c2771550e49ae5434fbbdc47c588320779d898a1c6c44abc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c80ca055157c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210810230738-345780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "fd04a71ec58a"
	                    ],
	                    "NetworkID": "bbd8860751ea5575d329fd1cd5a50c553eaf293821fd930785d61bef270aae40",
	                    "EndpointID": "580b8ece3f2daa107ff440ab050a3c40a4e5fd74f0d38dc27c7e7527244245ff",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
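The `NetworkSettings.Ports` block in the inspect output above is where the host-side port mappings come from (e.g. `127.0.0.1:33179` for SSH, which the later `status` call fails to dial). A small sketch, using an abbreviated copy of that block, of extracting the same container-port to host-port mapping that a `docker inspect --format` template would:

```python
import json

# Abbreviated from the NetworkSettings.Ports block of the inspect output above.
inspect_json = """
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33179"}],
    "8444/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33176"}]
}}}]
"""

def host_ports(raw):
    """Map each container port (e.g. '22/tcp') to its bound host port."""
    container, = json.loads(raw)  # docker inspect always returns a JSON array
    ports = container["NetworkSettings"]["Ports"]
    return {cp: binds[0]["HostPort"] for cp, binds in ports.items() if binds}

print(host_ports(inspect_json))  # {'22/tcp': '33179', '8444/tcp': '33176'}
```

This is only a parsing sketch over the JSON shown in the log, not minikube's actual implementation (minikube shells out to `docker container inspect --format`, as the `cli_runner` lines above show).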
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210810230738-345780 -n default-k8s-different-port-20210810230738-345780
E0810 23:37:39.400890  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210810230738-345780 -n default-k8s-different-port-20210810230738-345780: exit status 3 (3.329573508s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0810 23:37:42.073890  618773 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43928->127.0.0.1:33179: read: connection reset by peer
	E0810 23:37:42.073912  618773 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43928->127.0.0.1:33179: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "default-k8s-different-port-20210810230738-345780" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (1724.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (61.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210810225249-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20210810225249-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (1m1.356324491s)

                                                
                                                
-- stdout --
	* [calico-20210810225249-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20210810225249-345780 in cluster calico-20210810225249-345780
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0810 23:29:57.656595  596855 out.go:298] Setting OutFile to fd 1 ...
	I0810 23:29:57.656685  596855 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 23:29:57.656690  596855 out.go:311] Setting ErrFile to fd 2...
	I0810 23:29:57.656695  596855 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 23:29:57.656832  596855 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 23:29:57.657345  596855 out.go:305] Setting JSON to false
	I0810 23:29:57.697490  596855 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":11559,"bootTime":1628626639,"procs":268,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 23:29:57.697625  596855 start.go:121] virtualization: kvm guest
	I0810 23:29:57.700831  596855 out.go:177] * [calico-20210810225249-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 23:29:57.701057  596855 notify.go:169] Checking for updates...
	I0810 23:29:57.702628  596855 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 23:29:57.704210  596855 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 23:29:57.705722  596855 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 23:29:57.707196  596855 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 23:29:57.708272  596855 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 23:29:57.767265  596855 docker.go:132] docker version: linux-19.03.15
	I0810 23:29:57.767454  596855 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 23:29:57.863187  596855 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2021-08-10 23:29:57.815901854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 23:29:57.863302  596855 docker.go:244] overlay module found
	I0810 23:29:57.865380  596855 out.go:177] * Using the docker driver based on user configuration
	I0810 23:29:57.865412  596855 start.go:278] selected driver: docker
	I0810 23:29:57.865421  596855 start.go:751] validating driver "docker" against <nil>
	I0810 23:29:57.865446  596855 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 23:29:57.865526  596855 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 23:29:57.865549  596855 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0810 23:29:57.867254  596855 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 23:29:57.868156  596855 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 23:29:57.960555  596855 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2021-08-10 23:29:57.9131105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddres
s:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warning
s:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 23:29:57.960692  596855 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 23:29:57.960879  596855 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 23:29:57.960911  596855 cni.go:93] Creating CNI manager for "calico"
	I0810 23:29:57.960974  596855 start_flags.go:272] Found "Calico" CNI - setting NetworkPlugin=cni
	I0810 23:29:57.960987  596855 start_flags.go:277] config:
	{Name:calico-20210810225249-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210810225249-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 23:29:57.963271  596855 out.go:177] * Starting control plane node calico-20210810225249-345780 in cluster calico-20210810225249-345780
	I0810 23:29:57.963321  596855 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 23:29:57.964891  596855 out.go:177] * Pulling base image ...
	I0810 23:29:57.965056  596855 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 23:29:57.965086  596855 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 23:29:57.965107  596855 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 23:29:57.965127  596855 cache.go:56] Caching tarball of preloaded images
	I0810 23:29:57.965310  596855 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 23:29:57.965323  596855 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 23:29:57.965492  596855 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/config.json ...
	I0810 23:29:57.965516  596855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/config.json: {Name:mk858d5ba3705d7a62a0577e223491be679fcdef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 23:29:58.067343  596855 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 23:29:58.067377  596855 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 23:29:58.067395  596855 cache.go:205] Successfully downloaded all kic artifacts
	I0810 23:29:58.067446  596855 start.go:313] acquiring machines lock for calico-20210810225249-345780: {Name:mkcde65f0e2c2f761190a193deafb25579c12528 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 23:29:58.067597  596855 start.go:317] acquired machines lock for "calico-20210810225249-345780" in 112.108µs
	I0810 23:29:58.067628  596855 start.go:89] Provisioning new machine with config: &{Name:calico-20210810225249-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210810225249-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 23:29:58.067721  596855 start.go:126] createHost starting for "" (driver="docker")
	I0810 23:29:58.070330  596855 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0810 23:29:58.070633  596855 start.go:160] libmachine.API.Create for "calico-20210810225249-345780" (driver="docker")
	I0810 23:29:58.070682  596855 client.go:168] LocalClient.Create starting
	I0810 23:29:58.070756  596855 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 23:29:58.070794  596855 main.go:130] libmachine: Decoding PEM data...
	I0810 23:29:58.070825  596855 main.go:130] libmachine: Parsing certificate...
	I0810 23:29:58.070981  596855 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 23:29:58.071008  596855 main.go:130] libmachine: Decoding PEM data...
	I0810 23:29:58.071027  596855 main.go:130] libmachine: Parsing certificate...
	I0810 23:29:58.071387  596855 cli_runner.go:115] Run: docker network inspect calico-20210810225249-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0810 23:29:58.117042  596855 cli_runner.go:162] docker network inspect calico-20210810225249-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0810 23:29:58.117176  596855 network_create.go:255] running [docker network inspect calico-20210810225249-345780] to gather additional debugging logs...
	I0810 23:29:58.117205  596855 cli_runner.go:115] Run: docker network inspect calico-20210810225249-345780
	W0810 23:29:58.168299  596855 cli_runner.go:162] docker network inspect calico-20210810225249-345780 returned with exit code 1
	I0810 23:29:58.168337  596855 network_create.go:258] error running [docker network inspect calico-20210810225249-345780]: docker network inspect calico-20210810225249-345780: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210810225249-345780
	I0810 23:29:58.168503  596855 network_create.go:260] output of [docker network inspect calico-20210810225249-345780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210810225249-345780
	
	** /stderr **
	I0810 23:29:58.168598  596855 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 23:29:58.215270  596855 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b87adee7cb5e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3e:b3:e6:10}}
	I0810 23:29:58.216170  596855 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-08818fa49fb2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:72:b1:84:3c}}
	I0810 23:29:58.217289  596855 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-bbd8860751ea IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:5b:21:24:9d}}
	I0810 23:29:58.219258  596855 network.go:240] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-aac5483000b1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:07:0e:84:92}}
	I0810 23:29:58.220389  596855 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc00068c030] misses:0}
	I0810 23:29:58.220428  596855 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0810 23:29:58.220444  596855 network_create.go:106] attempt to create docker network calico-20210810225249-345780 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0810 23:29:58.220497  596855 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210810225249-345780
	I0810 23:29:58.299329  596855 network_create.go:90] docker network calico-20210810225249-345780 192.168.85.0/24 created
	I0810 23:29:58.299377  596855 kic.go:106] calculated static IP "192.168.85.2" for the "calico-20210810225249-345780" container
	I0810 23:29:58.299440  596855 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0810 23:29:58.345235  596855 cli_runner.go:115] Run: docker volume create calico-20210810225249-345780 --label name.minikube.sigs.k8s.io=calico-20210810225249-345780 --label created_by.minikube.sigs.k8s.io=true
	I0810 23:29:58.389794  596855 oci.go:102] Successfully created a docker volume calico-20210810225249-345780
	I0810 23:29:58.389919  596855 cli_runner.go:115] Run: docker run --rm --name calico-20210810225249-345780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210810225249-345780 --entrypoint /usr/bin/test -v calico-20210810225249-345780:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0810 23:29:59.215906  596855 oci.go:106] Successfully prepared a docker volume calico-20210810225249-345780
	W0810 23:29:59.215977  596855 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0810 23:29:59.215985  596855 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0810 23:29:59.216048  596855 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0810 23:29:59.216070  596855 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 23:29:59.216115  596855 kic.go:179] Starting extracting preloaded images to volume ...
	I0810 23:29:59.216220  596855 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210810225249-345780:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0810 23:29:59.333925  596855 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210810225249-345780 --name calico-20210810225249-345780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210810225249-345780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210810225249-345780 --network calico-20210810225249-345780 --ip 192.168.85.2 --volume calico-20210810225249-345780:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0810 23:29:59.883870  596855 cli_runner.go:115] Run: docker container inspect calico-20210810225249-345780 --format={{.State.Running}}
	I0810 23:29:59.939110  596855 cli_runner.go:115] Run: docker container inspect calico-20210810225249-345780 --format={{.State.Status}}
	I0810 23:29:59.995258  596855 cli_runner.go:115] Run: docker exec calico-20210810225249-345780 stat /var/lib/dpkg/alternatives/iptables
	I0810 23:30:00.137836  596855 oci.go:278] the created container "calico-20210810225249-345780" has a running status.
	I0810 23:30:00.137878  596855 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225249-345780/id_rsa...
	I0810 23:30:00.279271  596855 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225249-345780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0810 23:30:00.735454  596855 cli_runner.go:115] Run: docker container inspect calico-20210810225249-345780 --format={{.State.Status}}
	I0810 23:30:00.787781  596855 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0810 23:30:00.787808  596855 kic_runner.go:115] Args: [docker exec --privileged calico-20210810225249-345780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0810 23:30:03.393300  596855 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210810225249-345780:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.177025195s)
	I0810 23:30:03.393338  596855 kic.go:188] duration metric: took 4.177220 seconds to extract preloaded images to volume
	I0810 23:30:03.393426  596855 cli_runner.go:115] Run: docker container inspect calico-20210810225249-345780 --format={{.State.Status}}
	I0810 23:30:03.438013  596855 machine.go:88] provisioning docker machine ...
	I0810 23:30:03.438068  596855 ubuntu.go:169] provisioning hostname "calico-20210810225249-345780"
	I0810 23:30:03.438144  596855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210810225249-345780
	I0810 23:30:03.490646  596855 main.go:130] libmachine: Using SSH client type: native
	I0810 23:30:03.490857  596855 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I0810 23:30:03.490879  596855 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20210810225249-345780 && echo "calico-20210810225249-345780" | sudo tee /etc/hostname
	I0810 23:30:03.638884  596855 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20210810225249-345780
	
	I0810 23:30:03.639010  596855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210810225249-345780
	I0810 23:30:03.694789  596855 main.go:130] libmachine: Using SSH client type: native
	I0810 23:30:03.694952  596855 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I0810 23:30:03.694974  596855 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210810225249-345780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210810225249-345780/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210810225249-345780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 23:30:03.809285  596855 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 23:30:03.809335  596855 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 23:30:03.809369  596855 ubuntu.go:177] setting up certificates
	I0810 23:30:03.809382  596855 provision.go:83] configureAuth start
	I0810 23:30:03.809450  596855 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210810225249-345780
	I0810 23:30:03.856503  596855 provision.go:137] copyHostCerts
	I0810 23:30:03.856585  596855 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 23:30:03.856593  596855 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 23:30:03.856653  596855 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 23:30:03.856736  596855 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 23:30:03.856747  596855 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 23:30:03.856767  596855 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 23:30:03.856811  596855 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 23:30:03.856818  596855 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 23:30:03.856836  596855 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 23:30:03.856899  596855 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.calico-20210810225249-345780 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210810225249-345780]
	I0810 23:30:03.924665  596855 provision.go:171] copyRemoteCerts
	I0810 23:30:03.924748  596855 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 23:30:03.924809  596855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210810225249-345780
	I0810 23:30:03.979798  596855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225249-345780/id_rsa Username:docker}
	I0810 23:30:04.065122  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 23:30:04.083367  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0810 23:30:04.104771  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 23:30:04.123881  596855 provision.go:86] duration metric: configureAuth took 314.480639ms
	I0810 23:30:04.123922  596855 ubuntu.go:193] setting minikube options for container-runtime
	I0810 23:30:04.124204  596855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210810225249-345780
	I0810 23:30:04.171125  596855 main.go:130] libmachine: Using SSH client type: native
	I0810 23:30:04.171355  596855 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I0810 23:30:04.171386  596855 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 23:30:04.527060  596855 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 23:30:04.527096  596855 machine.go:91] provisioned docker machine in 1.089049472s
	I0810 23:30:04.527108  596855 client.go:171] LocalClient.Create took 6.456415897s
	I0810 23:30:04.527128  596855 start.go:168] duration metric: libmachine.API.Create for "calico-20210810225249-345780" took 6.456493993s
	I0810 23:30:04.527139  596855 start.go:267] post-start starting for "calico-20210810225249-345780" (driver="docker")
	I0810 23:30:04.527145  596855 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 23:30:04.527203  596855 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 23:30:04.527246  596855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210810225249-345780
	I0810 23:30:04.572220  596855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225249-345780/id_rsa Username:docker}
	I0810 23:30:04.661093  596855 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 23:30:04.664245  596855 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0810 23:30:04.664279  596855 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0810 23:30:04.664291  596855 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0810 23:30:04.664299  596855 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0810 23:30:04.664319  596855 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 23:30:04.664387  596855 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 23:30:04.664508  596855 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem -> 3457802.pem in /etc/ssl/certs
	I0810 23:30:04.664684  596855 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 23:30:04.673134  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 23:30:04.691213  596855 start.go:270] post-start completed in 164.058404ms
	I0810 23:30:04.691579  596855 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210810225249-345780
	I0810 23:30:04.735977  596855 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/config.json ...
	I0810 23:30:04.736238  596855 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 23:30:04.736289  596855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210810225249-345780
	I0810 23:30:04.779140  596855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225249-345780/id_rsa Username:docker}
	I0810 23:30:04.861852  596855 start.go:129] duration metric: createHost completed in 6.794113657s
	I0810 23:30:04.861882  596855 start.go:80] releasing machines lock for "calico-20210810225249-345780", held for 6.794270944s
	I0810 23:30:04.862010  596855 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210810225249-345780
	I0810 23:30:04.906251  596855 ssh_runner.go:149] Run: systemctl --version
	I0810 23:30:04.906308  596855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210810225249-345780
	I0810 23:30:04.906314  596855 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 23:30:04.906400  596855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210810225249-345780
	I0810 23:30:04.953490  596855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225249-345780/id_rsa Username:docker}
	I0810 23:30:04.957358  596855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225249-345780/id_rsa Username:docker}
	I0810 23:30:05.083937  596855 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 23:30:05.104385  596855 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 23:30:05.115207  596855 docker.go:153] disabling docker service ...
	I0810 23:30:05.115277  596855 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 23:30:05.129828  596855 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 23:30:05.139064  596855 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 23:30:05.217071  596855 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 23:30:05.289898  596855 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 23:30:05.299996  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 23:30:05.313824  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 23:30:05.322882  596855 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 23:30:05.329825  596855 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 23:30:05.329891  596855 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 23:30:05.337724  596855 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 23:30:05.347756  596855 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 23:30:05.408485  596855 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 23:30:05.418384  596855 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 23:30:05.418447  596855 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 23:30:05.421666  596855 start.go:417] Will wait 60s for crictl version
	I0810 23:30:05.421721  596855 ssh_runner.go:149] Run: sudo crictl version
	I0810 23:30:05.450887  596855 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0810 23:30:05.450977  596855 ssh_runner.go:149] Run: crio --version
	I0810 23:30:05.518111  596855 ssh_runner.go:149] Run: crio --version
	I0810 23:30:05.599923  596855 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0810 23:30:05.600025  596855 cli_runner.go:115] Run: docker network inspect calico-20210810225249-345780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0810 23:30:05.646735  596855 ssh_runner.go:149] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0810 23:30:05.650650  596855 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 23:30:05.661298  596855 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 23:30:05.661383  596855 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 23:30:05.709577  596855 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 23:30:05.709603  596855 crio.go:333] Images already preloaded, skipping extraction
	I0810 23:30:05.709656  596855 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 23:30:05.735045  596855 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 23:30:05.735076  596855 cache_images.go:74] Images are preloaded, skipping loading
	I0810 23:30:05.735141  596855 ssh_runner.go:149] Run: crio config
	I0810 23:30:05.809872  596855 cni.go:93] Creating CNI manager for "calico"
	I0810 23:30:05.809901  596855 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 23:30:05.809914  596855 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210810225249-345780 NodeName:calico-20210810225249-345780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/
minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 23:30:05.810097  596855 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "calico-20210810225249-345780"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 23:30:05.810214  596855 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=calico-20210810225249-345780 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:calico-20210810225249-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0810 23:30:05.810285  596855 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 23:30:05.818246  596855 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 23:30:05.818326  596855 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 23:30:05.825854  596855 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0810 23:30:05.839641  596855 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 23:30:05.853667  596855 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0810 23:30:05.868017  596855 ssh_runner.go:149] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0810 23:30:05.871502  596855 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 23:30:05.882324  596855 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780 for IP: 192.168.85.2
	I0810 23:30:05.882402  596855 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 23:30:05.882427  596855 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 23:30:05.882505  596855 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/client.key
	I0810 23:30:05.882525  596855 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/client.crt with IP's: []
	I0810 23:30:06.265838  596855 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/client.crt ...
	I0810 23:30:06.265875  596855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/client.crt: {Name:mk9438fa46a82a8cf5e48e6fc7c1320684012ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 23:30:06.266104  596855 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/client.key ...
	I0810 23:30:06.266124  596855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/client.key: {Name:mk41cac02dc15ac9cf77db29f54e363eb31b649a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 23:30:06.266240  596855 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.key.43b9df8c
	I0810 23:30:06.266258  596855 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0810 23:30:06.479935  596855 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.crt.43b9df8c ...
	I0810 23:30:06.479977  596855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.crt.43b9df8c: {Name:mk113918ec6a830ad61dfe136b616ae2cdaf4257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 23:30:06.480183  596855 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.key.43b9df8c ...
	I0810 23:30:06.480198  596855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.key.43b9df8c: {Name:mk0e6d6b2e20394c18ee39100f2ab89ffa08ec90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 23:30:06.480275  596855 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.crt
	I0810 23:30:06.480382  596855 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.key
	I0810 23:30:06.480437  596855 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/proxy-client.key
	I0810 23:30:06.480446  596855 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/proxy-client.crt with IP's: []
	I0810 23:30:06.614004  596855 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/proxy-client.crt ...
	I0810 23:30:06.614038  596855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/proxy-client.crt: {Name:mkaf2eb8a2855d6d4fad3e609e0c9ebe2015c395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 23:30:06.614238  596855 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/proxy-client.key ...
	I0810 23:30:06.614252  596855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/proxy-client.key: {Name:mk6bc6dbe2bf66652fc34185c597edaa08cfd394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 23:30:06.614423  596855 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem (1338 bytes)
	W0810 23:30:06.614464  596855 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780_empty.pem, impossibly tiny 0 bytes
	I0810 23:30:06.614476  596855 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0810 23:30:06.614501  596855 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 23:30:06.614526  596855 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 23:30:06.614549  596855 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 23:30:06.614591  596855 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem (1708 bytes)
	I0810 23:30:06.615598  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 23:30:06.659348  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0810 23:30:06.678711  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 23:30:06.697169  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225249-345780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0810 23:30:06.715112  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 23:30:06.733245  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 23:30:06.751467  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 23:30:06.770364  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0810 23:30:06.789008  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 23:30:06.807261  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/345780.pem --> /usr/share/ca-certificates/345780.pem (1338 bytes)
	I0810 23:30:06.824807  596855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/3457802.pem --> /usr/share/ca-certificates/3457802.pem (1708 bytes)
	I0810 23:30:06.842544  596855 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 23:30:06.855008  596855 ssh_runner.go:149] Run: openssl version
	I0810 23:30:06.859975  596855 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 23:30:06.868795  596855 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 23:30:06.872115  596855 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0810 23:30:06.872179  596855 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 23:30:06.877127  596855 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 23:30:06.884814  596855 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/345780.pem && ln -fs /usr/share/ca-certificates/345780.pem /etc/ssl/certs/345780.pem"
	I0810 23:30:06.892264  596855 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/345780.pem
	I0810 23:30:06.895291  596855 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:29 /usr/share/ca-certificates/345780.pem
	I0810 23:30:06.895340  596855 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/345780.pem
	I0810 23:30:06.900086  596855 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/345780.pem /etc/ssl/certs/51391683.0"
	I0810 23:30:06.907075  596855 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3457802.pem && ln -fs /usr/share/ca-certificates/3457802.pem /etc/ssl/certs/3457802.pem"
	I0810 23:30:06.914085  596855 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3457802.pem
	I0810 23:30:06.917007  596855 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:29 /usr/share/ca-certificates/3457802.pem
	I0810 23:30:06.917056  596855 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3457802.pem
	I0810 23:30:06.921702  596855 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3457802.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 23:30:06.928556  596855 kubeadm.go:390] StartCluster: {Name:calico-20210810225249-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210810225249-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 23:30:06.928648  596855 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 23:30:06.928698  596855 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 23:30:06.953892  596855 cri.go:76] found id: ""
	I0810 23:30:06.953985  596855 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 23:30:06.962044  596855 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 23:30:06.969403  596855 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0810 23:30:06.969463  596855 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 23:30:06.976282  596855 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 23:30:06.976334  596855 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0810 23:30:07.288341  596855 out.go:204]   - Generating certificates and keys ...
	I0810 23:30:09.697286  596855 out.go:204]   - Booting up control plane ...
	I0810 23:30:24.255235  596855 out.go:204]   - Configuring RBAC rules ...
	I0810 23:30:24.670241  596855 cni.go:93] Creating CNI manager for "calico"
	I0810 23:30:24.672239  596855 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0810 23:30:24.672345  596855 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 23:30:24.672360  596855 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22469 bytes)
	I0810 23:30:24.687730  596855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W0810 23:30:25.066754  596855 out.go:242] ! initialization failed, will try again: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
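	The validation failure above comes from a Calico manifest written against the legacy `apiextensions.k8s.io/v1beta1` CRD schema: in `apiextensions.k8s.io/v1` the top-level `spec.version` field was removed, and a `spec.versions` list (each entry with `served`, `storage`, and a structural `schema`) is mandatory. A minimal sketch of the shape the v1 API server will accept — the group and kind names here are illustrative placeholders, not taken from the actual Calico manifest:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.crd.example.org   # illustrative name, not from the Calico manifest
spec:
  group: crd.example.org
  scope: Cluster
  names:
    kind: Example
    plural: examples
    singular: example
  # v1 rejects the old top-level `version: v1` field; a `versions` list is required
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

	Updating the bundled CNI manifest to this shape (or pinning a Calico release that ships v1 CRDs) would avoid the `unknown field "version"` / `missing required field "versions"` errors that cause the retry loop below.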
	I0810 23:30:25.066799  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0810 23:30:42.752117  596855 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (17.685292178s)
	I0810 23:30:42.752196  596855 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0810 23:30:42.764023  596855 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0810 23:30:42.764097  596855 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 23:30:42.789858  596855 cri.go:76] found id: ""
	I0810 23:30:42.789922  596855 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0810 23:30:42.789962  596855 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 23:30:42.798591  596855 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 23:30:42.798636  596855 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0810 23:30:43.130536  596855 out.go:204]   - Generating certificates and keys ...
	I0810 23:30:43.940112  596855 out.go:204]   - Booting up control plane ...
	I0810 23:30:57.488213  596855 out.go:204]   - Configuring RBAC rules ...
	I0810 23:30:57.905631  596855 cni.go:93] Creating CNI manager for "calico"
	I0810 23:30:57.908003  596855 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0810 23:30:57.908111  596855 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 23:30:57.908126  596855 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22469 bytes)
	I0810 23:30:57.922839  596855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 23:30:58.264588  596855 kubeadm.go:392] StartCluster complete in 51.336035227s
	I0810 23:30:58.264649  596855 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0810 23:30:58.264716  596855 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0810 23:30:58.295733  596855 cri.go:76] found id: "8871b3c9a964ebe81c01d29939002aab5a800a1e1cec631b3b8c37db53dea50a"
	I0810 23:30:58.295764  596855 cri.go:76] found id: ""
	I0810 23:30:58.295774  596855 logs.go:270] 1 containers: [8871b3c9a964ebe81c01d29939002aab5a800a1e1cec631b3b8c37db53dea50a]
	I0810 23:30:58.295829  596855 ssh_runner.go:149] Run: which crictl
	I0810 23:30:58.299523  596855 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0810 23:30:58.299596  596855 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0810 23:30:58.322417  596855 cri.go:76] found id: "e64fca1fa304dd992855b69fa2e6673e079ef55cbc3746d38d7b0dde5a9452c3"
	I0810 23:30:58.322442  596855 cri.go:76] found id: ""
	I0810 23:30:58.322450  596855 logs.go:270] 1 containers: [e64fca1fa304dd992855b69fa2e6673e079ef55cbc3746d38d7b0dde5a9452c3]
	I0810 23:30:58.322496  596855 ssh_runner.go:149] Run: which crictl
	I0810 23:30:58.325477  596855 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0810 23:30:58.325529  596855 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0810 23:30:58.348151  596855 cri.go:76] found id: ""
	I0810 23:30:58.348179  596855 logs.go:270] 0 containers: []
	W0810 23:30:58.348188  596855 logs.go:272] No container was found matching "coredns"
	I0810 23:30:58.348196  596855 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0810 23:30:58.348254  596855 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0810 23:30:58.372243  596855 cri.go:76] found id: "5aa5fc99bcf864b748e616822ddac1050c9bcbeb099c43c1591ef7390d4d99e5"
	I0810 23:30:58.372276  596855 cri.go:76] found id: ""
	I0810 23:30:58.372285  596855 logs.go:270] 1 containers: [5aa5fc99bcf864b748e616822ddac1050c9bcbeb099c43c1591ef7390d4d99e5]
	I0810 23:30:58.372352  596855 ssh_runner.go:149] Run: which crictl
	I0810 23:30:58.375567  596855 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0810 23:30:58.375631  596855 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0810 23:30:58.401064  596855 cri.go:76] found id: ""
	I0810 23:30:58.401090  596855 logs.go:270] 0 containers: []
	W0810 23:30:58.401097  596855 logs.go:272] No container was found matching "kube-proxy"
	I0810 23:30:58.401104  596855 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0810 23:30:58.401166  596855 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0810 23:30:58.429504  596855 cri.go:76] found id: ""
	I0810 23:30:58.429534  596855 logs.go:270] 0 containers: []
	W0810 23:30:58.429544  596855 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0810 23:30:58.429552  596855 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0810 23:30:58.429615  596855 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0810 23:30:58.460225  596855 cri.go:76] found id: ""
	I0810 23:30:58.460255  596855 logs.go:270] 0 containers: []
	W0810 23:30:58.460266  596855 logs.go:272] No container was found matching "storage-provisioner"
	I0810 23:30:58.460275  596855 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0810 23:30:58.460350  596855 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0810 23:30:58.494491  596855 cri.go:76] found id: "9902515e13673f7e9b910fd884319f20876c90b96c1b137e6023f1c8f8e8bfc4"
	I0810 23:30:58.494518  596855 cri.go:76] found id: ""
	I0810 23:30:58.494527  596855 logs.go:270] 1 containers: [9902515e13673f7e9b910fd884319f20876c90b96c1b137e6023f1c8f8e8bfc4]
	I0810 23:30:58.494594  596855 ssh_runner.go:149] Run: which crictl
	I0810 23:30:58.497989  596855 logs.go:123] Gathering logs for describe nodes ...
	I0810 23:30:58.498022  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0810 23:30:58.597502  596855 logs.go:123] Gathering logs for kube-scheduler [5aa5fc99bcf864b748e616822ddac1050c9bcbeb099c43c1591ef7390d4d99e5] ...
	I0810 23:30:58.597545  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5aa5fc99bcf864b748e616822ddac1050c9bcbeb099c43c1591ef7390d4d99e5"
	I0810 23:30:58.630326  596855 logs.go:123] Gathering logs for etcd [e64fca1fa304dd992855b69fa2e6673e079ef55cbc3746d38d7b0dde5a9452c3] ...
	I0810 23:30:58.630391  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e64fca1fa304dd992855b69fa2e6673e079ef55cbc3746d38d7b0dde5a9452c3"
	I0810 23:30:58.663756  596855 logs.go:123] Gathering logs for kube-controller-manager [9902515e13673f7e9b910fd884319f20876c90b96c1b137e6023f1c8f8e8bfc4] ...
	I0810 23:30:58.663795  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9902515e13673f7e9b910fd884319f20876c90b96c1b137e6023f1c8f8e8bfc4"
	I0810 23:30:58.703580  596855 logs.go:123] Gathering logs for CRI-O ...
	I0810 23:30:58.703617  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0810 23:30:58.747884  596855 logs.go:123] Gathering logs for container status ...
	I0810 23:30:58.747927  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0810 23:30:58.779148  596855 logs.go:123] Gathering logs for kubelet ...
	I0810 23:30:58.779189  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0810 23:30:58.858791  596855 logs.go:123] Gathering logs for dmesg ...
	I0810 23:30:58.858837  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0810 23:30:58.887208  596855 logs.go:123] Gathering logs for kube-apiserver [8871b3c9a964ebe81c01d29939002aab5a800a1e1cec631b3b8c37db53dea50a] ...
	I0810 23:30:58.887259  596855 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8871b3c9a964ebe81c01d29939002aab5a800a1e1cec631b3b8c37db53dea50a"
	W0810 23:30:58.949691  596855 out.go:371] Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	W0810 23:30:58.949739  596855 out.go:242] * 
	* 
	W0810 23:30:58.949938  596855 out.go:242] X Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	W0810 23:30:58.949959  596855 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	W0810 23:30:58.952064  596855 out.go:242] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                           │
	│                                                                                                                                                         │
	│    * Please attach the following file to the GitHub issue:                                                                                              │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0810 23:30:58.954315  596855 out.go:177] 
	W0810 23:30:58.954431  596855 out.go:242] X Exiting due to GUEST_START: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	W0810 23:30:58.954447  596855 out.go:242] * 
	* 
	W0810 23:30:58.956199  596855 out.go:242] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                           │
	│                                                                                                                                                         │
	│    * Please attach the following file to the GitHub issue:                                                                                              │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0810 23:30:58.957657  596855 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (61.38s)
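
The root cause of the calico/Start failure above is the CRD validation error: the applied Calico manifest still declares the legacy singular `version` field in `CustomResourceDefinition.spec`, while the cluster validates against the `apiextensions.k8s.io/v1` schema, which dropped `version` in favor of a required `versions` list. A minimal sketch of the required transformation (field values here are illustrative, not taken from the actual cni.yaml):

```python
# Convert a legacy CRD spec that uses the singular "version" field
# (apiextensions.k8s.io/v1beta1 style) into the "versions" list that
# the v1 CustomResourceDefinition schema requires.
def migrate_crd_spec(spec: dict) -> dict:
    """Return a copy of the CRD spec shaped for apiextensions v1."""
    spec = dict(spec)  # leave the caller's dict untouched
    legacy = spec.pop("version", None)
    if legacy is not None and "versions" not in spec:
        spec["versions"] = [{
            "name": legacy,
            "served": True,   # v1 requires served/storage flags per version
            "storage": True,  # exactly one version must have storage: true
        }]
    return spec

# Hypothetical example spec, loosely modeled on a Calico CRD:
legacy_spec = {"group": "crd.projectcalico.org", "version": "v1",
               "scope": "Cluster"}
migrated = migrate_crd_spec(legacy_spec)
```

The longer-term fix is to ship a Calico manifest written for the v1 CRD API (including the per-version `schema` that v1 also requires); `--validate=false`, as the error message suggests, only hides the mismatch from kubectl, not from the API server.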

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (900.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210810225249-345780 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-dksbd" [a72697ab-d7a8-4138-ada9-b4c427b8fe08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:32:59.309689  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:33:46.178819  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
E0810 23:33:46.184137  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
E0810 23:33:46.194414  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
E0810 23:33:46.214768  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
E0810 23:33:46.255069  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
E0810 23:33:46.335413  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
E0810 23:33:46.495802  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:33:46.816439  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
E0810 23:33:47.457500  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:33:48.738550  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:33:51.299042  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:33:56.419278  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:34:06.660420  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:34:27.141278  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:34:43.882254  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
E0810 23:34:43.887666  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
E0810 23:34:43.898040  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
E0810 23:34:43.918370  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
E0810 23:34:43.958730  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
E0810 23:34:44.039054  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
E0810 23:34:44.199549  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
E0810 23:34:44.520138  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:34:45.161088  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:34:46.442182  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:34:49.002377  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:34:54.123035  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:04.364013  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:08.101648  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:15.424116  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
E0810 23:35:15.429417  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
E0810 23:35:15.439689  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
E0810 23:35:15.459950  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
E0810 23:35:15.500244  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
E0810 23:35:15.580600  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:15.741115  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
E0810 23:35:16.061672  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:16.702694  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:17.983194  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:20.543657  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:24.844670  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:25.663833  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:35.904521  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:35:56.385070  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:05.805712  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:13.892692  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:29.459870  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
E0810 23:36:29.465163  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
E0810 23:36:29.475434  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
E0810 23:36:29.495690  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
E0810 23:36:29.535978  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:29.616370  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
E0810 23:36:29.776793  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
E0810 23:36:30.022351  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
E0810 23:36:30.097571  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:30.737761  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:32.018416  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:34.579231  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:37.345944  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:39.700255  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:36:49.941340  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:10.421635  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:13.664654  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:27.726911  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:36.842031  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
E0810 23:37:36.847298  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
E0810 23:37:36.857463  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
E0810 23:37:36.877793  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
E0810 23:37:36.918099  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
E0810 23:37:36.998446  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
E0810 23:37:37.158899  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
E0810 23:37:37.479509  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:38.120480  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:41.961256  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:47.081456  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:51.382491  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:57.322402  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:37:59.266620  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
E0810 23:37:59.309886  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:38:17.803020  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:38:46.178715  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:38:58.763244  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:39:13.303579  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:39:13.862571  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:39:22.357671  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:39:43.882158  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:40:11.567202  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:40:15.423466  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:40:20.683425  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:40:43.107645  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:41:13.893299  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:41:29.459996  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0810 23:41:57.144363  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 16 more times]
E0810 23:42:13.665306  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 22 more times]
E0810 23:42:36.842258  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 21 more times]
E0810 23:42:59.309674  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 4 more times]
E0810 23:43:04.524187  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 32 more times]
E0810 23:43:36.712603  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 8 more times]
E0810 23:43:46.178573  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 30 more times]
E0810 23:44:16.939931  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 26 more times]
E0810 23:44:43.882559  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 12 more times]

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 3 more times]

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
    [previous line repeated 13 more times]
E0810 23:45:15.424073  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225249-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
[above warning repeated 58 more times]
E0810 23:46:13.892815  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
[above warning repeated 14 more times]
E0810 23:46:29.458992  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
[above warning repeated 31 more times]
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
[above warning repeated 12 more times]
E0810 23:47:13.665602  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
[above warning repeated 22 more times]
E0810 23:47:36.842973  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225248-345780/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
[above warning repeated 9 more times]
net_test.go:145: ***** TestNetworkPlugins/group/kindnet/NetCatPod: pod "app=netcat" failed to start within 15m0s: timed out waiting for the condition ****
net_test.go:145: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kindnet-20210810225249-345780 -n kindnet-20210810225249-345780
net_test.go:145: TestNetworkPlugins/group/kindnet/NetCatPod: showing logs for failed pods as of 2021-08-10 23:47:45.942598284 +0000 UTC m=+5295.771082078
net_test.go:145: (dbg) Run:  kubectl --context kindnet-20210810225249-345780 describe po netcat-66fbc655d5-dksbd -n default
net_test.go:145: (dbg) Non-zero exit: kubectl --context kindnet-20210810225249-345780 describe po netcat-66fbc655d5-dksbd -n default: context deadline exceeded (1.389µs)
net_test.go:145: kubectl --context kindnet-20210810225249-345780 describe po netcat-66fbc655d5-dksbd -n default: context deadline exceeded
net_test.go:145: (dbg) Run:  kubectl --context kindnet-20210810225249-345780 logs netcat-66fbc655d5-dksbd -n default
net_test.go:145: (dbg) Non-zero exit: kubectl --context kindnet-20210810225249-345780 logs netcat-66fbc655d5-dksbd -n default: context deadline exceeded (420ns)
net_test.go:145: kubectl --context kindnet-20210810225249-345780 logs netcat-66fbc655d5-dksbd -n default: context deadline exceeded
net_test.go:146: failed waiting for netcat pod: app=netcat within 15m0s: timed out waiting for the condition
--- FAIL: TestNetworkPlugins/group/kindnet/NetCatPod (900.61s)
Test pass (200/237)

Order  Passed test  Duration (seconds)
3 TestDownloadOnly/v1.14.0/json-events 5.65
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.21.3/json-events 5.84
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-rc.0/json-events 6.04
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.39
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.24
25 TestDownloadOnlyKic 12.81
26 TestOffline 109.71
29 TestAddons/parallel/Registry 21.56
31 TestAddons/parallel/MetricsServer 5.87
32 TestAddons/parallel/HelmTiller 13.1
33 TestAddons/parallel/Olm 45.89
34 TestAddons/parallel/CSI 45.37
35 TestAddons/parallel/GCPAuth 43.79
36 TestCertOptions 42.59
38 TestForceSystemdFlag 47.56
39 TestForceSystemdEnv 40.36
40 TestKVMDriverInstallOrUpdate 2.01
44 TestErrorSpam/setup 30.67
45 TestErrorSpam/start 1
46 TestErrorSpam/status 0.95
47 TestErrorSpam/pause 2.41
48 TestErrorSpam/unpause 1.34
49 TestErrorSpam/stop 6.56
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 71.09
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 5.83
56 TestFunctional/serial/KubeContext 0.05
57 TestFunctional/serial/KubectlGetPods 0.21
60 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
61 TestFunctional/serial/CacheCmd/cache/add_local 1.03
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
63 TestFunctional/serial/CacheCmd/cache/list 0.06
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
66 TestFunctional/serial/CacheCmd/cache/delete 0.11
67 TestFunctional/serial/MinikubeKubectlCmd 0.12
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 65.11
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.11
72 TestFunctional/serial/LogsFileCmd 1.13
74 TestFunctional/parallel/ConfigCmd 0.45
75 TestFunctional/parallel/DashboardCmd 3.91
76 TestFunctional/parallel/DryRun 0.64
77 TestFunctional/parallel/InternationalLanguage 0.28
78 TestFunctional/parallel/StatusCmd 1.04
81 TestFunctional/parallel/ServiceCmd 22.84
82 TestFunctional/parallel/AddonsCmd 0.17
83 TestFunctional/parallel/PersistentVolumeClaim 46.22
85 TestFunctional/parallel/SSHCmd 0.61
86 TestFunctional/parallel/CpCmd 0.58
87 TestFunctional/parallel/MySQL 25.63
88 TestFunctional/parallel/FileSync 0.29
89 TestFunctional/parallel/CertSync 1.78
93 TestFunctional/parallel/NodeLabels 0.07
94 TestFunctional/parallel/LoadImage 2.54
95 TestFunctional/parallel/RemoveImage 3.05
96 TestFunctional/parallel/LoadImageFromFile 2.39
97 TestFunctional/parallel/BuildImage 4.84
98 TestFunctional/parallel/ListImages 0.4
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
101 TestFunctional/parallel/Version/short 0.06
102 TestFunctional/parallel/Version/components 0.75
104 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
107 TestFunctional/parallel/ProfileCmd/profile_list 0.4
108 TestFunctional/parallel/MountCmd/any-port 4.96
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
110 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
111 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
115 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
119 TestFunctional/parallel/MountCmd/specific-port 1.83
120 TestFunctional/delete_busybox_image 0.09
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.04
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.35
147 TestKicCustomNetwork/create_custom_network 33.32
148 TestKicCustomNetwork/use_default_bridge_network 26.29
149 TestKicExistingNetwork 26.95
150 TestMainNoArgs 0.06
153 TestMultiNode/serial/FreshStart2Nodes 122.94
154 TestMultiNode/serial/DeployApp2Nodes 4.49
156 TestMultiNode/serial/AddNode 29.83
157 TestMultiNode/serial/ProfileList 0.31
158 TestMultiNode/serial/CopyFile 2.5
159 TestMultiNode/serial/StopNode 2.56
160 TestMultiNode/serial/StartAfterStop 30.87
161 TestMultiNode/serial/RestartKeepsNodes 137.48
162 TestMultiNode/serial/DeleteNode 5.51
163 TestMultiNode/serial/StopMultiNode 41.35
164 TestMultiNode/serial/RestartMultiNode 70.12
165 TestMultiNode/serial/ValidateNameConflict 31.09
171 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
172 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.54
174 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
175 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 10.09
177 TestDebPackageInstall/install_amd64_debian:10/minikube 0
178 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 10.34
180 TestDebPackageInstall/install_amd64_debian:9/minikube 0
181 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.2
183 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
184 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 15.07
186 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
187 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 13.93
189 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
190 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 15.21
192 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
193 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 13.15
199 TestInsufficientStorage 13.51
203 TestMissingContainerUpgrade 130.14
212 TestPause/serial/Start 104.84
220 TestNetworkPlugins/group/false 0.71
225 TestStartStop/group/old-k8s-version/serial/FirstStart 115.74
226 TestPause/serial/SecondStartNoReconfiguration 6.33
227 TestPause/serial/Pause 0.71
228 TestPause/serial/VerifyStatus 0.34
229 TestPause/serial/Unpause 0.68
232 TestStartStop/group/no-preload/serial/FirstStart 131.42
233 TestPause/serial/DeletePaused 4.7
234 TestPause/serial/VerifyDeletedResources 3.89
236 TestStartStop/group/embed-certs/serial/FirstStart 77.17
237 TestStartStop/group/old-k8s-version/serial/DeployApp 9.52
238 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.89
239 TestStartStop/group/old-k8s-version/serial/Stop 20.91
240 TestStartStop/group/embed-certs/serial/DeployApp 9.4
241 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.75
243 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
244 TestStartStop/group/old-k8s-version/serial/SecondStart 634.96
245 TestStartStop/group/no-preload/serial/DeployApp 8.51
246 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
248 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
249 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.22
250 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
251 TestStartStop/group/old-k8s-version/serial/Pause 2.78
253 TestStartStop/group/default-k8s-different-port/serial/FirstStart 69.84
254 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.56
255 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.78
258 TestStartStop/group/newest-cni/serial/FirstStart 53.21
259 TestNetworkPlugins/group/auto/Start 91.97
260 TestStartStop/group/newest-cni/serial/DeployApp 0
261 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.72
262 TestStartStop/group/newest-cni/serial/Stop 17.47
263 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
264 TestStartStop/group/newest-cni/serial/SecondStart 26.01
265 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
266 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
267 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
268 TestStartStop/group/newest-cni/serial/Pause 2.58
269 TestNetworkPlugins/group/custom-weave/Start 75.07
270 TestNetworkPlugins/group/auto/KubeletFlags 0.29
271 TestNetworkPlugins/group/auto/NetCatPod 9.49
272 TestNetworkPlugins/group/auto/DNS 0.17
273 TestNetworkPlugins/group/auto/Localhost 0.14
274 TestNetworkPlugins/group/auto/HairPin 0.17
275 TestNetworkPlugins/group/cilium/Start 76.41
276 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.31
277 TestNetworkPlugins/group/custom-weave/NetCatPod 10.44
279 TestNetworkPlugins/group/cilium/ControllerPod 5.02
280 TestNetworkPlugins/group/cilium/KubeletFlags 0.29
281 TestNetworkPlugins/group/cilium/NetCatPod 9.34
282 TestNetworkPlugins/group/cilium/DNS 0.18
283 TestNetworkPlugins/group/cilium/Localhost 0.16
284 TestNetworkPlugins/group/cilium/HairPin 0.15
285 TestNetworkPlugins/group/enable-default-cni/Start 49.81
286 TestNetworkPlugins/group/kindnet/Start 97.18
287 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
288 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
289 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
290 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
291 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
292 TestNetworkPlugins/group/bridge/Start 50.2
293 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
294 TestNetworkPlugins/group/bridge/NetCatPod 9.26
295 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
298 TestNetworkPlugins/group/bridge/DNS 0.18
299 TestNetworkPlugins/group/bridge/Localhost 0.17
300 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.14.0/json-events (5.65s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221930-345780 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221930-345780 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.649028444s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (5.65s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210810221930-345780
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210810221930-345780: exit status 85 (70.815145ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:19:30
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:19:30.276950  345792 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:19:30.277031  345792 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:19:30.277037  345792 out.go:311] Setting ErrFile to fd 2...
	I0810 22:19:30.277041  345792 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:19:30.277172  345792 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0810 22:19:30.277316  345792 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0810 22:19:30.280578  345792 out.go:305] Setting JSON to true
	I0810 22:19:30.316703  345792 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":7332,"bootTime":1628626639,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:19:30.316845  345792 start.go:121] virtualization: kvm guest
	I0810 22:19:30.320716  345792 notify.go:169] Checking for updates...
	I0810 22:19:30.323193  345792 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:19:30.372100  345792 docker.go:132] docker version: linux-19.03.15
	I0810 22:19:30.372266  345792 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:19:30.454671  345792 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:19:30.406945208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:19:30.454802  345792 docker.go:244] overlay module found
	I0810 22:19:30.456955  345792 start.go:278] selected driver: docker
	I0810 22:19:30.456973  345792 start.go:751] validating driver "docker" against <nil>
	I0810 22:19:30.457485  345792 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:19:30.539915  345792 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:19:30.492601631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:19:30.540023  345792 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:19:30.540558  345792 start_flags.go:344] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0810 22:19:30.540650  345792 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0810 22:19:30.540673  345792 cni.go:93] Creating CNI manager for ""
	I0810 22:19:30.540679  345792 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:19:30.540689  345792 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0810 22:19:30.540711  345792 start_flags.go:277] config:
	{Name:download-only-20210810221930-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210810221930-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:19:30.543094  345792 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:19:30.544641  345792 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0810 22:19:30.544757  345792 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:19:30.581345  345792 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:19:30.581377  345792 cache.go:56] Caching tarball of preloaded images
	I0810 22:19:30.581689  345792 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0810 22:19:30.584375  345792 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:19:30.621657  345792 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:70b8731eaaa1b4de2d1cd60021fc1260 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:19:30.635762  345792 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:19:30.635793  345792 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:19:33.833245  345792 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:19:33.833339  345792 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:19:35.179255  345792 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on crio
	I0810 22:19:35.179579  345792 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/download-only-20210810221930-345780/config.json ...
	I0810 22:19:35.179614  345792 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/download-only-20210810221930-345780/config.json: {Name:mk399c4af8267410dc92185bf1c2b9666ac413cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:19:35.179868  345792 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0810 22:19:35.180748  345792 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.14.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210810221930-345780"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

TestDownloadOnly/v1.21.3/json-events (5.84s)

=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221930-345780 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221930-345780 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.834888036s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (5.84s)

TestDownloadOnly/v1.21.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

TestDownloadOnly/v1.21.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210810221930-345780
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210810221930-345780: exit status 85 (69.852883ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:19:35
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:19:35.994176  345934 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:19:35.994284  345934 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:19:35.994290  345934 out.go:311] Setting ErrFile to fd 2...
	I0810 22:19:35.994294  345934 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:19:35.994420  345934 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0810 22:19:35.994552  345934 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0810 22:19:35.994687  345934 out.go:305] Setting JSON to true
	I0810 22:19:36.029884  345934 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":7337,"bootTime":1628626639,"procs":182,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:19:36.029996  345934 start.go:121] virtualization: kvm guest
	I0810 22:19:36.033459  345934 notify.go:169] Checking for updates...
	W0810 22:19:36.035760  345934 start.go:659] api.Load failed for download-only-20210810221930-345780: filestore "download-only-20210810221930-345780": Docker machine "download-only-20210810221930-345780" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0810 22:19:36.035832  345934 driver.go:335] Setting default libvirt URI to qemu:///system
	W0810 22:19:36.035871  345934 start.go:659] api.Load failed for download-only-20210810221930-345780: filestore "download-only-20210810221930-345780": Docker machine "download-only-20210810221930-345780" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0810 22:19:36.083707  345934 docker.go:132] docker version: linux-19.03.15
	I0810 22:19:36.083825  345934 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:19:36.162965  345934 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:19:36.117723279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:19:36.163065  345934 docker.go:244] overlay module found
	I0810 22:19:36.165205  345934 start.go:278] selected driver: docker
	I0810 22:19:36.165226  345934 start.go:751] validating driver "docker" against &{Name:download-only-20210810221930-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210810221930-345780 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:19:36.165788  345934 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:19:36.245207  345934 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:19:36.199977555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:19:36.245792  345934 cni.go:93] Creating CNI manager for ""
	I0810 22:19:36.245811  345934 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:19:36.245824  345934 start_flags.go:277] config:
	{Name:download-only-20210810221930-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210810221930-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:19:36.248176  345934 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:19:36.249955  345934 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:19:36.250154  345934 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:19:36.287233  345934 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:19:36.287269  345934 cache.go:56] Caching tarball of preloaded images
	I0810 22:19:36.287537  345934 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:19:36.289776  345934 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:19:36.326313  345934 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:5b844d0f443dc130a4f324a367701516 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:19:36.335244  345934 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:19:36.335270  345934 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:19:40.129064  345934 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:19:40.129152  345934 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210810221930-345780"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.07s)

TestDownloadOnly/v1.22.0-rc.0/json-events (6.04s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221930-345780 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221930-345780 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.044323166s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (6.04s)

TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210810221930-345780
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210810221930-345780: exit status 85 (71.478135ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:19:41
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:19:41.902523  346080 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:19:41.902605  346080 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:19:41.902609  346080 out.go:311] Setting ErrFile to fd 2...
	I0810 22:19:41.902612  346080 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:19:41.902737  346080 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0810 22:19:41.902876  346080 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0810 22:19:41.902986  346080 out.go:305] Setting JSON to true
	I0810 22:19:41.938223  346080 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":7343,"bootTime":1628626639,"procs":182,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:19:41.938772  346080 start.go:121] virtualization: kvm guest
	I0810 22:19:41.942055  346080 notify.go:169] Checking for updates...
	W0810 22:19:41.944336  346080 start.go:659] api.Load failed for download-only-20210810221930-345780: filestore "download-only-20210810221930-345780": Docker machine "download-only-20210810221930-345780" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0810 22:19:41.944394  346080 driver.go:335] Setting default libvirt URI to qemu:///system
	W0810 22:19:41.944424  346080 start.go:659] api.Load failed for download-only-20210810221930-345780: filestore "download-only-20210810221930-345780": Docker machine "download-only-20210810221930-345780" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0810 22:19:41.989771  346080 docker.go:132] docker version: linux-19.03.15
	I0810 22:19:41.989891  346080 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:19:42.065998  346080 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:19:42.022981434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:19:42.066086  346080 docker.go:244] overlay module found
	I0810 22:19:42.068245  346080 start.go:278] selected driver: docker
	I0810 22:19:42.068270  346080 start.go:751] validating driver "docker" against &{Name:download-only-20210810221930-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210810221930-345780 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:19:42.068861  346080 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:19:42.149529  346080 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2021-08-10 22:19:42.103676095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:19:42.150789  346080 cni.go:93] Creating CNI manager for ""
	I0810 22:19:42.150812  346080 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0810 22:19:42.150848  346080 start_flags.go:277] config:
	{Name:download-only-20210810221930-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210810221930-345780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:19:42.153804  346080 cache.go:117] Beginning downloading kic base image for docker with crio
	I0810 22:19:42.155347  346080 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0810 22:19:42.155383  346080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0810 22:19:42.189672  346080 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:19:42.189709  346080 cache.go:56] Caching tarball of preloaded images
	I0810 22:19:42.190052  346080 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0810 22:19:42.192821  346080 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:19:42.228328  346080 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:c7902b63f7bbc786f5f337da25a17477 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:19:42.242470  346080 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0810 22:19:42.242515  346080 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0810 22:19:45.894675  346080 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:19:45.894796  346080 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210810221930-345780"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.39s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.39s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210810221930-345780
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

TestDownloadOnlyKic (12.81s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210810221948-345780 --force --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210810221948-345780 --force --alsologtostderr --driver=docker  --container-runtime=crio: (11.346186413s)
helpers_test.go:176: Cleaning up "download-docker-20210810221948-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210810221948-345780
--- PASS: TestDownloadOnlyKic (12.81s)

TestOffline (109.71s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20210810224957-345780 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20210810224957-345780 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m46.536766664s)
helpers_test.go:176: Cleaning up "offline-crio-20210810224957-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20210810224957-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20210810224957-345780: (3.175271037s)
--- PASS: TestOffline (109.71s)

TestAddons/parallel/Registry (21.56s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 12.314501ms
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:340: "registry-42sw9" [ad871fd4-a4ea-463a-9d90-19741a4ffbcb] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009172059s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:340: "registry-proxy-jbgjs" [11981bc4-eb03-45e7-bc8c-b5a04d3ed1dd] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008570282s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210810222001-345780 delete po -l run=registry-test --now
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210810222001-345780 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210810222001-345780 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.851888489s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 ip

=== CONT  TestAddons/parallel/Registry
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.56s)

TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 1.820706ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:340: "metrics-server-77c99ccb96-87mh9" [19d2c0d3-1f9a-41b0-a39e-55b7b5644aec] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.052447685s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210810222001-345780 top pods -n kube-system
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

TestAddons/parallel/HelmTiller (13.1s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 11.973461ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:340: "tiller-deploy-768d69497-95j6c" [532cd5c7-b8b2-4b22-90de-521ff0e324e3] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008491869s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210810222001-345780 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210810222001-345780 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (7.71111755s)
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.10s)

TestAddons/parallel/Olm (45.89s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 2.139062ms
addons_test.go:467: olm-operator stabilized in 4.248216ms
addons_test.go:471: packageserver stabilized in 6.498703ms
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:340: "catalog-operator-75d496484d-hrvkz" [091164a1-8034-47af-b134-a089e218791c] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.008506116s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:340: "olm-operator-859c88c96-rc8mk" [1d2e183d-c572-4478-b725-8aea852eceb2] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.007254516s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:340: "packageserver-675b7f455c-4qc9n" [c0d24baa-a40e-4400-8cd5-db1daf01eea6] Running
helpers_test.go:340: "packageserver-675b7f455c-tltfn" [1f5c2acd-356a-4ab4-b2b1-4d574300e906] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:340: "packageserver-675b7f455c-4qc9n" [c0d24baa-a40e-4400-8cd5-db1daf01eea6] Running
helpers_test.go:340: "packageserver-675b7f455c-tltfn" [1f5c2acd-356a-4ab4-b2b1-4d574300e906] Running
helpers_test.go:340: "packageserver-675b7f455c-4qc9n" [c0d24baa-a40e-4400-8cd5-db1daf01eea6] Running
helpers_test.go:340: "packageserver-675b7f455c-tltfn" [1f5c2acd-356a-4ab4-b2b1-4d574300e906] Running
helpers_test.go:340: "packageserver-675b7f455c-4qc9n" [c0d24baa-a40e-4400-8cd5-db1daf01eea6] Running
helpers_test.go:340: "packageserver-675b7f455c-tltfn" [1f5c2acd-356a-4ab4-b2b1-4d574300e906] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:340: "packageserver-675b7f455c-4qc9n" [c0d24baa-a40e-4400-8cd5-db1daf01eea6] Running
helpers_test.go:340: "packageserver-675b7f455c-tltfn" [1f5c2acd-356a-4ab4-b2b1-4d574300e906] Running
helpers_test.go:340: "packageserver-675b7f455c-4qc9n" [c0d24baa-a40e-4400-8cd5-db1daf01eea6] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.00791473s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:340: "operatorhubio-catalog-7pg5t" [f1fbb16d-f5c6-4637-83ce-77da6a378e9c] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.006980564s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210810222001-345780 create -f testdata/etcd.yaml
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810222001-345780 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210810222001-345780 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810222001-345780 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210810222001-345780 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810222001-345780 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210810222001-345780 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810222001-345780 get csv -n my-etcd
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810222001-345780 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (45.89s)

TestAddons/parallel/CSI (45.37s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 18.06637ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210810222001-345780 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210810222001-345780 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210810222001-345780 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:340: "task-pv-pod" [93b530a5-3d99-4e15-bdc5-f4e6252ff5cb] Pending
helpers_test.go:340: "task-pv-pod" [93b530a5-3d99-4e15-bdc5-f4e6252ff5cb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:340: "task-pv-pod" [93b530a5-3d99-4e15-bdc5-f4e6252ff5cb] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.007024958s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210810222001-345780 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:415: (dbg) Run:  kubectl --context addons-20210810222001-345780 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:415: (dbg) Run:  kubectl --context addons-20210810222001-345780 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210810222001-345780 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-20210810222001-345780 delete pod task-pv-pod: (2.784320354s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210810222001-345780 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210810222001-345780 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
2021/08/10 22:23:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210810222001-345780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210810222001-345780 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:340: "task-pv-pod-restore" [53a1b48d-a826-48a5-8395-7d7148ec9b78] Pending
helpers_test.go:340: "task-pv-pod-restore" [53a1b48d-a826-48a5-8395-7d7148ec9b78] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:340: "task-pv-pod-restore" [53a1b48d-a826-48a5-8395-7d7148ec9b78] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.00681401s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210810222001-345780 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210810222001-345780 delete pod task-pv-pod-restore: (2.428870449s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210810222001-345780 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210810222001-345780 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.322189281s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.37s)

TestAddons/parallel/GCPAuth (43.79s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210810222001-345780 create -f testdata/busybox.yaml
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [000bda3a-19ad-4075-b51b-fc3f3155d7d1] Pending
helpers_test.go:340: "busybox" [000bda3a-19ad-4075-b51b-fc3f3155d7d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [000bda3a-19ad-4075-b51b-fc3f3155d7d1] Running
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 8.0071735s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210810222001-345780 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210810222001-345780 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210810222001-345780 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:340: "private-image-7ff9c8c74f-gmpwx" [97c4e69d-d169-4f93-b3f4-863f3f06fe94] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:340: "private-image-7ff9c8c74f-gmpwx" [97c4e69d-d169-4f93-b3f4-863f3f06fe94] Running
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 17.007036133s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210810222001-345780 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:340: "private-image-eu-5956d58f9f-cfdfl" [86aaa1d0-38cb-4fdd-862e-f8824912e5a7] Pending
helpers_test.go:340: "private-image-eu-5956d58f9f-cfdfl" [86aaa1d0-38cb-4fdd-862e-f8824912e5a7] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:340: "private-image-eu-5956d58f9f-cfdfl" [86aaa1d0-38cb-4fdd-862e-f8824912e5a7] Running
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 11.008133621s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210810222001-345780 addons disable gcp-auth --alsologtostderr -v=1: (6.678745895s)
--- PASS: TestAddons/parallel/GCPAuth (43.79s)

TestCertOptions (42.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210810225357-345780 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210810225357-345780 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.010259633s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210810225357-345780 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210810225357-345780 config view
helpers_test.go:176: Cleaning up "cert-options-20210810225357-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210810225357-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210810225357-345780: (3.185465614s)
--- PASS: TestCertOptions (42.59s)

TestForceSystemdFlag (47.56s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210810225249-345780 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0810 22:52:59.311468  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210810225249-345780 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.743731188s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210810225249-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210810225249-345780
E0810 22:53:36.709845  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210810225249-345780: (2.815193515s)
--- PASS: TestForceSystemdFlag (47.56s)

TestForceSystemdEnv (40.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210810225337-345780 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210810225337-345780 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.505395167s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210810225337-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210810225337-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210810225337-345780: (4.8561597s)
--- PASS: TestForceSystemdEnv (40.36s)

TestKVMDriverInstallOrUpdate (2.01s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.01s)

TestErrorSpam/setup (30.67s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210810222853-345780 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210810222853-345780 --driver=docker  --container-runtime=crio
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210810222853-345780 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210810222853-345780 --driver=docker  --container-runtime=crio: (30.668382402s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (30.67s)

TestErrorSpam/start (1s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 start --dry-run
--- PASS: TestErrorSpam/start (1.00s)

TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (2.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 pause: (1.559999073s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 pause
--- PASS: TestErrorSpam/pause (2.41s)

TestErrorSpam/unpause (1.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

TestErrorSpam/stop (6.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 stop: (6.268231553s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222853-345780 --log_dir /tmp/nospam-20210810222853-345780 stop
--- PASS: TestErrorSpam/stop (6.56s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/test/nested/copy/345780/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (71.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222942-345780 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210810222942-345780 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m11.09410583s)
--- PASS: TestFunctional/serial/StartWithProxy (71.09s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.83s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222942-345780 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210810222942-345780 --alsologtostderr -v=8: (5.825052817s)
functional_test.go:631: soft start took 5.825741186s for "functional-20210810222942-345780" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.83s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.21s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210810222942-345780 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222942-345780 cache add k8s.gcr.io/pause:3.3: (1.175438201s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222942-345780 cache add k8s.gcr.io/pause:latest: (1.224615324s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210810222942-345780 /tmp/functional-20210810222942-345780709847981
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 cache add minikube-local-cache-test:functional-20210810222942-345780
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 cache delete minikube-local-cache-test:functional-20210810222942-345780
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210810222942-345780
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (289.444027ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 cache reload
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 kubectl -- --context functional-20210810222942-345780 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210810222942-345780 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (65.11s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222942-345780 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210810222942-345780 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m5.104708345s)
functional_test.go:719: restart took 1m5.104854017s for "functional-20210810222942-345780" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (65.11s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210810222942-345780 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.11s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222942-345780 logs: (1.109835686s)
--- PASS: TestFunctional/serial/LogsCmd (1.11s)

TestFunctional/serial/LogsFileCmd (1.13s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 logs --file /tmp/functional-20210810222942-345780170947368/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222942-345780 logs --file /tmp/functional-20210810222942-345780170947368/logs.txt: (1.127262522s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.13s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222942-345780 config get cpus: exit status 14 (63.149462ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222942-345780 config get cpus: exit status 14 (62.813143ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (3.91s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210810222942-345780 --alsologtostderr -v=1]
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210810222942-345780 --alsologtostderr -v=1] ...
helpers_test.go:504: unable to kill pid 391068: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.91s)

TestFunctional/parallel/DryRun (0.64s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222942-345780 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210810222942-345780 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (274.324598ms)

-- stdout --
	* [functional-20210810222942-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

-- /stdout --
** stderr ** 
	I0810 22:32:41.921864  390066 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:32:41.921967  390066 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:32:41.921986  390066 out.go:311] Setting ErrFile to fd 2...
	I0810 22:32:41.921989  390066 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:32:41.922114  390066 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:32:41.922374  390066 out.go:305] Setting JSON to false
	I0810 22:32:41.961475  390066 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":8123,"bootTime":1628626639,"procs":248,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:32:41.961601  390066 start.go:121] virtualization: kvm guest
	I0810 22:32:41.964647  390066 out.go:177] * [functional-20210810222942-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:32:41.966486  390066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:32:41.967989  390066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:32:41.969502  390066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:32:41.971027  390066 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:32:41.972053  390066 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:32:42.023275  390066 docker.go:132] docker version: linux-19.03.15
	I0810 22:32:42.023399  390066 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:32:42.115260  390066 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-08-10 22:32:42.062880692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:32:42.115368  390066 docker.go:244] overlay module found
	I0810 22:32:42.117924  390066 out.go:177] * Using the docker driver based on existing profile
	I0810 22:32:42.117959  390066 start.go:278] selected driver: docker
	I0810 22:32:42.117968  390066 start.go:751] validating driver "docker" against &{Name:functional-20210810222942-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210810222942-345780 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-
provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:32:42.118117  390066 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:32:42.118169  390066 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:32:42.118192  390066 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0810 22:32:42.119858  390066 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:32:42.121985  390066 out.go:177] 
	W0810 22:32:42.122123  390066 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0810 22:32:42.123583  390066 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222942-345780 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.64s)

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222942-345780 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210810222942-345780 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (275.609645ms)

-- stdout --
	* [functional-20210810222942-345780] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

-- /stdout --
** stderr ** 
	I0810 22:32:42.531943  390410 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:32:42.532183  390410 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:32:42.532195  390410 out.go:311] Setting ErrFile to fd 2...
	I0810 22:32:42.532199  390410 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:32:42.532382  390410 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:32:42.532668  390410 out.go:305] Setting JSON to false
	I0810 22:32:42.577318  390410 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":8124,"bootTime":1628626639,"procs":250,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:32:42.577460  390410 start.go:121] virtualization: kvm guest
	I0810 22:32:42.580327  390410 out.go:177] * [functional-20210810222942-345780] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0810 22:32:42.581942  390410 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:32:42.583637  390410 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:32:42.585184  390410 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:32:42.586750  390410 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:32:42.587632  390410 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:32:42.641569  390410 docker.go:132] docker version: linux-19.03.15
	I0810 22:32:42.641696  390410 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:32:42.731870  390410 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2021-08-10 22:32:42.68180534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:32:42.731973  390410 docker.go:244] overlay module found
	I0810 22:32:42.734408  390410 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0810 22:32:42.734445  390410 start.go:278] selected driver: docker
	I0810 22:32:42.734453  390410 start.go:751] validating driver "docker" against &{Name:functional-20210810222942-345780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210810222942-345780 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-
provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:32:42.734567  390410 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:32:42.734608  390410 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:32:42.734627  390410 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0810 22:32:42.736287  390410 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:32:42.738515  390410 out.go:177] 
	W0810 22:32:42.738637  390410 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0810 22:32:42.740154  390410 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 status
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmd (22.84s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210810222942-345780 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210810222942-345780 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:340: "hello-node-6cbfcd7cbc-zqvb5" [0c14e6ad-2e8b-4935-87bf-122fb3745295] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:340: "hello-node-6cbfcd7cbc-zqvb5" [0c14e6ad-2e8b-4935-87bf-122fb3745295] Running
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 21.072212258s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 service list
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.49.2:31475
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:31475
functional_test.go:1431: Attempting to fetch http://192.168.49.2:31475 ...
functional_test.go:1450: http://192.168.49.2:31475: success! body:

Hostname: hello-node-6cbfcd7cbc-zqvb5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31475
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (22.84s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (46.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:340: "storage-provisioner" [87e03d60-087e-43b8-819b-874eead2eed4] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008854371s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210810222942-345780 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210810222942-345780 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210810222942-345780 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210810222942-345780 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:340: "sp-pod" [52c21173-19be-4117-968f-255dfd36a9ad] Pending
helpers_test.go:340: "sp-pod" [52c21173-19be-4117-968f-255dfd36a9ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:340: "sp-pod" [52c21173-19be-4117-968f-255dfd36a9ad] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.006654261s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210810222942-345780 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210810222942-345780 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210810222942-345780 delete -f testdata/storage-provisioner/pod.yaml: (13.089348198s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210810222942-345780 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:340: "sp-pod" [e40422e2-b601-42cd-977d-27f588100952] Pending
helpers_test.go:340: "sp-pod" [e40422e2-b601-42cd-977d-27f588100952] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0810 22:32:59.310551  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:32:59.316476  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:32:59.326731  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:32:59.347036  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:32:59.387361  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:32:59.467707  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:32:59.628138  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
helpers_test.go:340: "sp-pod" [e40422e2-b601-42cd-977d-27f588100952] Running
E0810 22:32:59.948350  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:33:00.588661  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:33:01.869503  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:33:04.429692  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006275995s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210810222942-345780 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.22s)

TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (0.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:546: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.58s)

TestFunctional/parallel/MySQL (25.63s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210810222942-345780 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:340: "mysql-9bbbc5bbb-8nd7c" [7a1cb6b1-eb17-4330-83b7-e6335f61fb58] Pending
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:340: "mysql-9bbbc5bbb-8nd7c" [7a1cb6b1-eb17-4330-83b7-e6335f61fb58] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:340: "mysql-9bbbc5bbb-8nd7c" [7a1cb6b1-eb17-4330-83b7-e6335f61fb58] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.016138689s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222942-345780 exec mysql-9bbbc5bbb-8nd7c -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210810222942-345780 exec mysql-9bbbc5bbb-8nd7c -- mysql -ppassword -e "show databases;": exit status 1 (285.214568ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222942-345780 exec mysql-9bbbc5bbb-8nd7c -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210810222942-345780 exec mysql-9bbbc5bbb-8nd7c -- mysql -ppassword -e "show databases;": exit status 1 (329.667022ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222942-345780 exec mysql-9bbbc5bbb-8nd7c -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210810222942-345780 exec mysql-9bbbc5bbb-8nd7c -- mysql -ppassword -e "show databases;": exit status 1 (247.890851ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222942-345780 exec mysql-9bbbc5bbb-8nd7c -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.63s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/345780/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo cat /etc/test/nested/copy/345780/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.78s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/345780.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo cat /etc/ssl/certs/345780.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/345780.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo cat /usr/share/ca-certificates/345780.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3457802.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo cat /etc/ssl/certs/3457802.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/3457802.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo cat /usr/share/ca-certificates/3457802.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.78s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210810222942-345780 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/LoadImage (2.54s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210810222942-345780
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 image load docker.io/library/busybox:load-functional-20210810222942-345780
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222942-345780 image load docker.io/library/busybox:load-functional-20210810222942-345780: (1.055579932s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210810222942-345780 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210810222942-345780
functional_test.go:373: (dbg) Done: out/minikube-linux-amd64 ssh -p functional-20210810222942-345780 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210810222942-345780: (1.196325879s)
--- PASS: TestFunctional/parallel/LoadImage (2.54s)

TestFunctional/parallel/RemoveImage (3.05s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210810222942-345780
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 image load docker.io/library/busybox:remove-functional-20210810222942-345780
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222942-345780 image load docker.io/library/busybox:remove-functional-20210810222942-345780: (2.159463099s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 image rm docker.io/library/busybox:remove-functional-20210810222942-345780
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210810222942-345780 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (3.05s)

TestFunctional/parallel/LoadImageFromFile (2.39s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210810222942-345780
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210810222942-345780
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 image load /home/jenkins/workspace/Docker_Linux_crio_integration/busybox.tar
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222942-345780 image load /home/jenkins/workspace/Docker_Linux_crio_integration/busybox.tar: (1.463650309s)
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210810222942-345780 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (2.39s)

TestFunctional/parallel/BuildImage (4.84s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 image build -t localhost/my-image:functional-20210810222942-345780 testdata/build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222942-345780 image build -t localhost/my-image:functional-20210810222942-345780 testdata/build: (4.508476166s)
functional_test.go:412: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210810222942-345780 image build -t localhost/my-image:functional-20210810222942-345780 testdata/build:
STEP 1: FROM busybox
STEP 2: RUN true
--> e6f04aec1f8
STEP 3: ADD content.txt /
STEP 4: COMMIT localhost/my-image:functional-20210810222942-345780
--> 0f35c53f6df
Successfully tagged localhost/my-image:functional-20210810222942-345780
0f35c53f6dfaf29ad445c9a94cb3e9c563165556fab31864bfd700aff035561f
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210810222942-345780 image build -t localhost/my-image:functional-20210810222942-345780 testdata/build:
Resolved "busybox" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/busybox:latest...
Getting image source signatures
Copying blob sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2
Copying config sha256:69593048aa3acfee0f75f20b77acb549de2472063053f6730c4091b53f2dfb02
Writing manifest to image destination
Storing signatures
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210810222942-345780 -- sudo crictl inspecti localhost/my-image:functional-20210810222942-345780
--- PASS: TestFunctional/parallel/BuildImage (4.84s)

TestFunctional/parallel/ListImages (0.4s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 image ls
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210810222942-345780 image ls:
localhost/minikube-local-cache-test:functional-20210810222942-345780
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.40s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo systemctl is-active docker": exit status 1 (316.364525ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo systemctl is-active containerd"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo systemctl is-active containerd": exit status 1 (307.251971ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210810222942-345780 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1245: Took "336.685336ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1259: Took "63.90241ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/MountCmd/any-port (4.96s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210810222942-345780 /tmp/mounttest051364967:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628634760257897022" to /tmp/mounttest051364967/created-by-test
functional_test_mount_test.go:110: wrote "test-1628634760257897022" to /tmp/mounttest051364967/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628634760257897022" to /tmp/mounttest051364967/test-1628634760257897022
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.057482ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 10 22:32 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 10 22:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 10 22:32 test-1628634760257897022
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh cat /mount-9p/test-1628634760257897022
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210810222942-345780 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:340: "busybox-mount" [e02da299-a37d-4cb5-bec8-af7d2d23c695] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:340: "busybox-mount" [e02da299-a37d-4cb5-bec8-af7d2d23c695] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:340: "busybox-mount" [e02da299-a37d-4cb5-bec8-af7d2d23c695] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 2.015348135s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210810222942-345780 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210810222942-345780 /tmp/mounttest051364967:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.96s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1295: Took "357.337778ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "59.038547ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210810222942-345780 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.102.123.172 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210810222942-345780 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/MountCmd/specific-port (1.83s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210810222942-345780 /tmp/mounttest718256348:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.685113ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh -- ls -la /mount-9p
2021/08/10 22:32:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210810222942-345780 /tmp/mounttest718256348:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh "sudo umount -f /mount-9p": exit status 1 (276.107405ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210810222942-345780 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210810222942-345780 /tmp/mounttest718256348:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)

TestFunctional/delete_busybox_image (0.09s)
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210810222942-345780
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210810222942-345780
--- PASS: TestFunctional/delete_busybox_image (0.09s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210810222942-345780
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210810222942-345780
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.35s)
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210810223458-345780 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210810223458-345780 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.589394ms)
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210810223458-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"7cfb1333-283b-400a-9834-f814fb8c4ffb","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig"},"datacontenttype":"application/json","id":"c918703f-b2dc-4fab-a68f-1054805fb956","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"d8b9386d-ca03-4e99-90dd-4668388b2a14","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube"},"datacontenttype":"application/json","id":"e648c268-89e0-48dd-bb1c-1d24fe7b6f46","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"a46a43eb-05c3-4b91-ae5f-cf5d873cf4ea","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"a6f93381-d1c9-46f9-b359-827dbf8694ae","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210810223458-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210810223458-345780
--- PASS: TestErrorJSONOutput (0.35s)

TestKicCustomNetwork/create_custom_network (33.32s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210810223458-345780 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210810223458-345780 --network=: (30.767011214s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210810223458-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210810223458-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210810223458-345780: (2.51391719s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.32s)

TestKicCustomNetwork/use_default_bridge_network (26.29s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210810223532-345780 --network=bridge
E0810 22:35:43.154469  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210810223532-345780 --network=bridge: (23.927513071s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210810223532-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210810223532-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210810223532-345780: (2.324066958s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.29s)

TestKicExistingNetwork (26.95s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210810223558-345780 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210810223558-345780 --network=existing-network: (24.191118608s)
helpers_test.go:176: Cleaning up "existing-network-20210810223558-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210810223558-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210810223558-345780: (2.487898838s)
--- PASS: TestKicExistingNetwork (26.95s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMultiNode/serial/FreshStart2Nodes (122.94s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223625-345780 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0810 22:37:13.665014  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:13.670332  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:13.680647  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:13.700977  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:13.741368  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:13.821695  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:13.982493  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:14.303061  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:14.943687  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:16.224163  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:18.784606  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:23.905500  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:34.145783  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:54.626143  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:37:59.310517  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
E0810 22:38:26.995062  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210810223625-345780 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m2.404728497s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.94s)

TestMultiNode/serial/DeployApp2Nodes (4.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- rollout status deployment/busybox: (2.406822492s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-crhdk -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-h8c2g -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-crhdk -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-h8c2g -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-crhdk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223625-345780 -- exec busybox-84b6686758-h8c2g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.49s)

TestMultiNode/serial/AddNode (29.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210810223625-345780 -v 3 --alsologtostderr
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210810223625-345780 -v 3 --alsologtostderr: (29.069630153s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.83s)

TestMultiNode/serial/ProfileList (0.31s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.31s)

TestMultiNode/serial/CopyFile (2.5s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --output json --alsologtostderr
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 cp testdata/cp-test.txt multinode-20210810223625-345780-m02:/home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 ssh -n multinode-20210810223625-345780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 cp testdata/cp-test.txt multinode-20210810223625-345780-m03:/home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 ssh -n multinode-20210810223625-345780-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.50s)

TestMultiNode/serial/StopNode (2.56s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223625-345780 node stop m03: (1.35314204s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210810223625-345780 status: exit status 7 (635.141399ms)
-- stdout --
	multinode-20210810223625-345780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210810223625-345780-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210810223625-345780-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --alsologtostderr: exit status 7 (571.348805ms)
-- stdout --
	multinode-20210810223625-345780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210810223625-345780-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210810223625-345780-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0810 22:39:11.111105  422928 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:39:11.111207  422928 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:39:11.111211  422928 out.go:311] Setting ErrFile to fd 2...
	I0810 22:39:11.111215  422928 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:39:11.111338  422928 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:39:11.111533  422928 out.go:305] Setting JSON to false
	I0810 22:39:11.111558  422928 mustload.go:65] Loading cluster: multinode-20210810223625-345780
	I0810 22:39:11.111858  422928 status.go:253] checking status of multinode-20210810223625-345780 ...
	I0810 22:39:11.112260  422928 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Status}}
	I0810 22:39:11.151948  422928 status.go:328] multinode-20210810223625-345780 host status = "Running" (err=<nil>)
	I0810 22:39:11.151979  422928 host.go:66] Checking if "multinode-20210810223625-345780" exists ...
	I0810 22:39:11.152235  422928 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210810223625-345780
	I0810 22:39:11.191151  422928 host.go:66] Checking if "multinode-20210810223625-345780" exists ...
	I0810 22:39:11.191466  422928 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:39:11.191512  422928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780
	I0810 22:39:11.230852  422928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33047 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780/id_rsa Username:docker}
	I0810 22:39:11.322573  422928 ssh_runner.go:149] Run: systemctl --version
	I0810 22:39:11.326096  422928 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:39:11.335549  422928 kubeconfig.go:93] found "multinode-20210810223625-345780" server: "https://192.168.49.2:8443"
	I0810 22:39:11.335576  422928 api_server.go:164] Checking apiserver status ...
	I0810 22:39:11.335607  422928 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:39:11.354449  422928 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1339/cgroup
	I0810 22:39:11.362783  422928 api_server.go:180] apiserver freezer: "9:freezer:/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/system.slice/crio-621651b937913af89a717bfd2db72743b1137803e5585c3a0f30a7b2e78876f0.scope"
	I0810 22:39:11.362867  422928 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/b91fa3f2886920ca6e967b035f0c0502903d62873700fa20faa09044b63170aa/system.slice/crio-621651b937913af89a717bfd2db72743b1137803e5585c3a0f30a7b2e78876f0.scope/freezer.state
	I0810 22:39:11.369987  422928 api_server.go:202] freezer state: "THAWED"
	I0810 22:39:11.370019  422928 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0810 22:39:11.375135  422928 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0810 22:39:11.375161  422928 status.go:419] multinode-20210810223625-345780 apiserver status = Running (err=<nil>)
	I0810 22:39:11.375171  422928 status.go:255] multinode-20210810223625-345780 status: &{Name:multinode-20210810223625-345780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0810 22:39:11.375188  422928 status.go:253] checking status of multinode-20210810223625-345780-m02 ...
	I0810 22:39:11.375434  422928 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780-m02 --format={{.State.Status}}
	I0810 22:39:11.416051  422928 status.go:328] multinode-20210810223625-345780-m02 host status = "Running" (err=<nil>)
	I0810 22:39:11.416085  422928 host.go:66] Checking if "multinode-20210810223625-345780-m02" exists ...
	I0810 22:39:11.416421  422928 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210810223625-345780-m02
	I0810 22:39:11.457152  422928 host.go:66] Checking if "multinode-20210810223625-345780-m02" exists ...
	I0810 22:39:11.457553  422928 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:39:11.457611  422928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210810223625-345780-m02
	I0810 22:39:11.497045  422928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33052 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223625-345780-m02/id_rsa Username:docker}
	I0810 22:39:11.577540  422928 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:39:11.586492  422928 status.go:255] multinode-20210810223625-345780-m02 status: &{Name:multinode-20210810223625-345780-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0810 22:39:11.586535  422928 status.go:253] checking status of multinode-20210810223625-345780-m03 ...
	I0810 22:39:11.586880  422928 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780-m03 --format={{.State.Status}}
	I0810 22:39:11.627355  422928 status.go:328] multinode-20210810223625-345780-m03 host status = "Stopped" (err=<nil>)
	I0810 22:39:11.627384  422928 status.go:341] host is not running, skipping remaining checks
	I0810 22:39:11.627397  422928 status.go:255] multinode-20210810223625-345780-m03 status: &{Name:multinode-20210810223625-345780-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.56s)

TestMultiNode/serial/StartAfterStop (30.87s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 node start m03 --alsologtostderr
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223625-345780 node start m03 --alsologtostderr: (30.02693593s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.87s)

TestMultiNode/serial/RestartKeepsNodes (137.48s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210810223625-345780
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210810223625-345780
E0810 22:39:57.508742  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210810223625-345780: (42.503418005s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223625-345780 --wait=true -v=8 --alsologtostderr
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210810223625-345780 --wait=true -v=8 --alsologtostderr: (1m34.868096508s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210810223625-345780
--- PASS: TestMultiNode/serial/RestartKeepsNodes (137.48s)

TestMultiNode/serial/DeleteNode (5.51s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223625-345780 node delete m03: (4.802205179s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.51s)

TestMultiNode/serial/StopMultiNode (41.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 stop
E0810 22:42:13.665839  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
E0810 22:42:41.349351  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223625-345780 stop: (41.077890374s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210810223625-345780 status: exit status 7 (136.199213ms)
-- stdout --
	multinode-20210810223625-345780
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210810223625-345780-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --alsologtostderr: exit status 7 (138.858911ms)
-- stdout --
	multinode-20210810223625-345780
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210810223625-345780-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0810 22:42:46.764809  435539 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:42:46.764910  435539 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:42:46.764932  435539 out.go:311] Setting ErrFile to fd 2...
	I0810 22:42:46.764940  435539 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:42:46.765055  435539 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:42:46.765241  435539 out.go:305] Setting JSON to false
	I0810 22:42:46.765263  435539 mustload.go:65] Loading cluster: multinode-20210810223625-345780
	I0810 22:42:46.765591  435539 status.go:253] checking status of multinode-20210810223625-345780 ...
	I0810 22:42:46.765988  435539 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780 --format={{.State.Status}}
	I0810 22:42:46.805940  435539 status.go:328] multinode-20210810223625-345780 host status = "Stopped" (err=<nil>)
	I0810 22:42:46.805971  435539 status.go:341] host is not running, skipping remaining checks
	I0810 22:42:46.805977  435539 status.go:255] multinode-20210810223625-345780 status: &{Name:multinode-20210810223625-345780 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0810 22:42:46.806027  435539 status.go:253] checking status of multinode-20210810223625-345780-m02 ...
	I0810 22:42:46.806346  435539 cli_runner.go:115] Run: docker container inspect multinode-20210810223625-345780-m02 --format={{.State.Status}}
	I0810 22:42:46.850587  435539 status.go:328] multinode-20210810223625-345780-m02 host status = "Stopped" (err=<nil>)
	I0810 22:42:46.850614  435539 status.go:341] host is not running, skipping remaining checks
	I0810 22:42:46.850622  435539 status.go:255] multinode-20210810223625-345780-m02 status: &{Name:multinode-20210810223625-345780-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (41.35s)

TestMultiNode/serial/RestartMultiNode (70.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223625-345780 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0810 22:42:59.310562  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210810223625-345780 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.397615415s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223625-345780 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (70.12s)

TestMultiNode/serial/ValidateNameConflict (31.09s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210810223625-345780
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223625-345780-m02 --driver=docker  --container-runtime=crio
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210810223625-345780-m02 --driver=docker  --container-runtime=crio: exit status 14 (109.894952ms)
-- stdout --
	* [multinode-20210810223625-345780-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210810223625-345780-m02' is duplicated with machine name 'multinode-20210810223625-345780-m02' in profile 'multinode-20210810223625-345780'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223625-345780-m03 --driver=docker  --container-runtime=crio
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210810223625-345780-m03 --driver=docker  --container-runtime=crio: (27.76244511s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210810223625-345780
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210810223625-345780: exit status 80 (277.456933ms)
-- stdout --
	* Adding node m03 to cluster multinode-20210810223625-345780
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210810223625-345780-m03 already exists in multinode-20210810223625-345780-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210810223625-345780-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210810223625-345780-m03: (2.885529274s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.09s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.54s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.540199798s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.54s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.09s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.088190233s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.09s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (10.34s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.340270957s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (10.34s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.2s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.20392086s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.20s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (15.07s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.070493252s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (15.07s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.93s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (13.92937456s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.93s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (15.21s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.207881075s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (15.21s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (13.15s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (13.145104194s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (13.15s)
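The per-distro checks above all run the same one-liner against a matrix of base images. A minimal Python sketch reproducing that command construction (the image list, mount path, and .deb filename are copied from the log; `build_install_cmd` is an illustrative helper, not minikube's actual test code):

```python
# Rebuild the docker-run command used by the deb-install checks above.
# Values are taken verbatim from the log; the helper itself is hypothetical.
DEB = "docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
OUT_DIR = "/home/jenkins/workspace/Docker_Linux_crio_integration/out"
IMAGES = [
    "debian:sid", "debian:latest", "debian:10", "debian:9",
    "ubuntu:latest", "ubuntu:20.10", "ubuntu:20.04", "ubuntu:18.04",
]

def build_install_cmd(image, out_dir=OUT_DIR):
    # apt-get pulls libvirt0 (the driver's only runtime dependency),
    # then dpkg installs the locally built package.
    inner = f"apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/{DEB}"
    return f'docker run --rm -v{out_dir}:/var/tmp {image} sh -c "{inner}"'

for img in IMAGES:
    print(build_install_cmd(img))
```

Each printed command matches the `(dbg) Run:` lines in the corresponding test section.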

                                                
                                    
TestInsufficientStorage (13.51s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210810224943-345780 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210810224943-345780 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.64858456s)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210810224943-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"be6c99a4-703f-4a7d-b6cd-e01dee242001","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig"},"datacontenttype":"application/json","id":"f6d88a01-37c8-4bf2-82dd-09fc9f25c1d7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"e323adbb-bf8a-420c-b76c-d7f8d29cba53","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube"},"datacontenttype":"application/json","id":"be7f0beb-d8e6-408e-82e6-adcca38584c9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"a019bedb-18d7-4dca-916b-ad8a6befa69f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"91304db3-4705-45cb-8fe0-ffd7de1943c1","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"7d196759-bfa3-49f7-97fa-ea6c20a9b1b6","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"58214397-43d1-4b2f-bbe8-1f36673bb371","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"8dc69f59-d033-437b-aa24-e75b874195d2","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210810224943-345780 in cluster insufficient-storage-20210810224943-345780","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"b493bfaf-3e01-429b-b47a-c913d81f5349","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"2b0a2ff8-0f42-419a-b976-90547cdcda6e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"484d6528-6c41-497a-ad91-6c0fcf6f7736","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"3f145991-b37b-469c-98bb-a6773dfcf093","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210810224943-345780 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210810224943-345780 --output=json --layout=cluster: exit status 7 (279.918783ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210810224943-345780","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210810224943-345780","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0810 22:49:50.815020  485780 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210810224943-345780" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210810224943-345780 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210810224943-345780 --output=json --layout=cluster: exit status 7 (299.603436ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210810224943-345780","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210810224943-345780","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0810 22:49:51.115171  485841 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210810224943-345780" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	E0810 22:49:51.127209  485841 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/insufficient-storage-20210810224943-345780/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210810224943-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210810224943-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210810224943-345780: (6.281753188s)
--- PASS: TestInsufficientStorage (13.51s)
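The `status --output=json --layout=cluster` payload shown above is plain JSON and can be consumed programmatically. A small sketch parsing the exact payload from this log (embedded here as a string, reflowed for readability) to pull out the per-component status names:

```python
import json

# Cluster-layout status payload printed by
# `minikube status -p insufficient-storage-... --output=json --layout=cluster`
# above, copied from the log.
payload = '''{"Name":"insufficient-storage-20210810224943-345780","StatusCode":507,
"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space",
"BinaryVersion":"v1.22.0",
"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},
"Nodes":[{"Name":"insufficient-storage-20210810224943-345780","StatusCode":507,
"StatusName":"InsufficientStorage",
"Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},
"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(payload)
print(status["StatusName"])              # InsufficientStorage
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(name, comp["StatusName"])  # apiserver Stopped / kubelet Stopped
```

This is essentially what `status_test.go` asserts against: the cluster reports `InsufficientStorage` (507) while apiserver and kubelet are `Stopped`.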

                                                
                                    
TestMissingContainerUpgrade (130.14s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.241559279.exe start -p missing-upgrade-20210810225147-345780 --memory=2200 --driver=docker  --container-runtime=crio
E0810 22:52:13.665635  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Done: /tmp/minikube-v1.9.1.241559279.exe start -p missing-upgrade-20210810225147-345780 --memory=2200 --driver=docker  --container-runtime=crio: (1m17.222310813s)
version_upgrade_test.go:320: (dbg) Run:  docker stop missing-upgrade-20210810225147-345780
version_upgrade_test.go:325: (dbg) Run:  docker rm missing-upgrade-20210810225147-345780
version_upgrade_test.go:331: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210810225147-345780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:331: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210810225147-345780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.890813661s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210810225147-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210810225147-345780
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210810225147-345780: (3.068707005s)
--- PASS: TestMissingContainerUpgrade (130.14s)

                                                
                                    
TestPause/serial/Start (104.84s)
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210810225233-345780 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210810225233-345780 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m44.83789467s)
--- PASS: TestPause/serial/Start (104.84s)

                                                
                                    
TestNetworkPlugins/group/false (0.71s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210810225249-345780 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210810225249-345780 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (252.623823ms)

                                                
                                                
-- stdout --
	* [false-20210810225249-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0810 22:52:49.237919  519643 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:52:49.238024  519643 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:52:49.238028  519643 out.go:311] Setting ErrFile to fd 2...
	I0810 22:52:49.238031  519643 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:52:49.238147  519643 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:52:49.238423  519643 out.go:305] Setting JSON to false
	I0810 22:52:49.276435  519643 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":9331,"bootTime":1628626639,"procs":242,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:52:49.276566  519643 start.go:121] virtualization: kvm guest
	I0810 22:52:49.279583  519643 out.go:177] * [false-20210810225249-345780] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:52:49.279746  519643 notify.go:169] Checking for updates...
	I0810 22:52:49.281321  519643 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:52:49.282888  519643 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:52:49.284431  519643 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:52:49.285923  519643 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:52:49.286647  519643 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:52:49.338496  519643 docker.go:132] docker version: linux-19.03.15
	I0810 22:52:49.338633  519643 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0810 22:52:49.425927  519643 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2021-08-10 22:52:49.376751756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0810 22:52:49.426062  519643 docker.go:244] overlay module found
	I0810 22:52:49.428378  519643 out.go:177] * Using the docker driver based on user configuration
	I0810 22:52:49.428414  519643 start.go:278] selected driver: docker
	I0810 22:52:49.428423  519643 start.go:751] validating driver "docker" against <nil>
	I0810 22:52:49.428448  519643 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0810 22:52:49.428503  519643 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0810 22:52:49.428525  519643 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0810 22:52:49.431189  519643 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0810 22:52:49.433091  519643 out.go:177] 
	W0810 22:52:49.433216  519643 out.go:242] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0810 22:52:49.434773  519643 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "false-20210810225249-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210810225249-345780
--- PASS: TestNetworkPlugins/group/false (0.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (115.74s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210810225417-345780 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210810225417-345780 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (1m55.743158453s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (115.74s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.33s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210810225233-345780 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210810225233-345780 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.310246934s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.33s)

                                                
                                    
TestPause/serial/Pause (0.71s)
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210810225233-345780 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210810225233-345780 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210810225233-345780 --output=json --layout=cluster: exit status 2 (336.340428ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20210810225233-345780","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210810225233-345780","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
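For reference, the `--output=json --layout=cluster` payload shown above can be decoded with a few plain structs. This is a hedged sketch only: the field names and values (418 "Paused", 405 "Stopped" are the HTTP-style codes minikube reuses) are copied from the log line, but the struct types and the `parseClusterStatus` helper are illustrative, not minikube's actual API types, and the sample payload is trimmed.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative types mirroring the cluster-layout status JSON in the log.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterState struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []node
}

// samplePayload is a trimmed copy of the JSON from the test output above.
const samplePayload = `{"Name":"pause-20210810225233-345780","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-20210810225233-345780","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

// parseClusterStatus decodes a cluster-layout status payload.
func parseClusterStatus(data []byte) (clusterState, error) {
	var st clusterState
	err := json.Unmarshal(data, &st)
	return st, err
}

func main() {
	st, err := parseClusterStatus([]byte(samplePayload))
	if err != nil {
		panic(err)
	}
	fmt.Println("cluster:", st.StatusName)
	fmt.Println("apiserver:", st.Nodes[0].Components["apiserver"].StatusName)
	fmt.Println("kubelet:", st.Nodes[0].Components["kubelet"].StatusName)
	// cluster: Paused
	// apiserver: Paused
	// kubelet: Stopped
}
```

This matches what the test observed: a paused apiserver with a stopped kubelet, while the overall command exits 2.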

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210810225233-345780 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestStartStop/group/no-preload/serial/FirstStart (131.42s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210810225439-345780 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210810225439-345780 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (2m11.415442787s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (131.42s)

TestPause/serial/DeletePaused (4.7s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210810225233-345780 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210810225233-345780 --alsologtostderr -v=5: (4.698097406s)
--- PASS: TestPause/serial/DeletePaused (4.70s)

TestPause/serial/VerifyDeletedResources (3.89s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:139: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.781734999s)
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210810225233-345780
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210810225233-345780: exit status 1 (58.99984ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210810225233-345780

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (3.89s)

TestStartStop/group/embed-certs/serial/FirstStart (77.17s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210810225510-345780 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210810225510-345780 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (1m17.166156113s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.17s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210810225417-345780 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [29a1ab2e-fa2e-11eb-84f4-0242370db299] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [29a1ab2e-fa2e-11eb-84f4-0242370db299] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.011300792s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210810225417-345780 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210810225417-345780 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210810225417-345780 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:188: (dbg) Done: kubectl --context old-k8s-version-20210810225417-345780 describe deploy/metrics-server -n kube-system: (1.269434687s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

TestStartStop/group/old-k8s-version/serial/Stop (20.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210810225417-345780 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210810225417-345780 --alsologtostderr -v=3: (20.912426101s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.91s)

TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210810225510-345780 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [009909e4-5ec1-482a-a31c-6911904a6501] Pending
helpers_test.go:340: "busybox" [009909e4-5ec1-482a-a31c-6911904a6501] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [009909e4-5ec1-482a-a31c-6911904a6501] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013869629s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210810225510-345780 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210810225510-345780 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210810225510-345780 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780: exit status 7 (105.339135ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210810225417-345780 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (634.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210810225417-345780 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210810225417-345780 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (10m34.62484866s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (634.96s)

TestStartStop/group/no-preload/serial/DeployApp (8.51s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210810225439-345780 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [11f870f8-8da2-4471-9aa4-7fb8f237f053] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [11f870f8-8da2-4471-9aa4-7fb8f237f053] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.012915208s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210810225439-345780 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.51s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210810225439-345780 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210810225439-345780 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.033574449s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210810225439-345780 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-5d8978d65d-bcnxv" [fa9c9dc9-fa2e-11eb-a460-0242c0a84302] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012094377s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-5d8978d65d-bcnxv" [fa9c9dc9-fa2e-11eb-a460-0242c0a84302] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006206306s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210810225417-345780 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.22s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210810225417-345780 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
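The "Found non-minikube image" lines above come from listing images inside the node (`sudo crictl images -o json`) and flagging anything outside the expected set. A hedged sketch of that check: the sample payload assumes the `images`/`repoTags` layout `crictl images -o json` emits, and both the allowlist and the `nonMinikubeImages` helper are stand-ins, not the test's real comparison against minikube's shipped image list.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageList models the assumed shape of `crictl images -o json` output.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// sampleCrictl is an illustrative payload, not captured from this run.
const sampleCrictl = `{"images":[{"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"]},{"repoTags":["k8s.gcr.io/pause:3.1"]}]}`

// expected is a stand-in allowlist of minikube-shipped images.
var expected = map[string]bool{"k8s.gcr.io/pause:3.1": true}

// nonMinikubeImages returns every repo tag not present in the allowlist.
func nonMinikubeImages(data []byte) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(data, &list); err != nil {
		return nil, err
	}
	var extra []string
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if !expected[tag] {
				extra = append(extra, tag)
			}
		}
	}
	return extra, nil
}

func main() {
	extra, err := nonMinikubeImages([]byte(sampleCrictl))
	if err != nil {
		panic(err)
	}
	for _, tag := range extra {
		fmt.Println("Found non-minikube image:", tag)
	}
}
```

In this run the extras (kindnetd, busybox) are benign leftovers from the CNI and DeployApp steps, which is why the test still passes.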

TestStartStop/group/old-k8s-version/serial/Pause (2.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210810225417-345780 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780: exit status 2 (326.315996ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780: exit status 2 (331.179866ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20210810225417-345780 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210810225417-345780 -n old-k8s-version-20210810225417-345780
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.78s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (69.84s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210810230738-345780 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3
E0810 23:07:59.313090  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210810230738-345780 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (1m9.835040954s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (69.84s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.56s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210810230738-345780 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [054dd9ab-b8bf-4a82-a10d-262d42ca8c28] Pending
helpers_test.go:340: "busybox" [054dd9ab-b8bf-4a82-a10d-262d42ca8c28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [054dd9ab-b8bf-4a82-a10d-262d42ca8c28] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.01289307s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210810230738-345780 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.56s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210810230738-345780 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210810230738-345780 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/newest-cni/serial/FirstStart (53.21s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210810232643-345780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
E0810 23:26:56.711552  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210810232643-345780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (53.20865706s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.21s)

TestNetworkPlugins/group/auto/Start (91.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210810225248-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio
E0810 23:27:13.665401  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210810225248-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio: (1m31.965221361s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.97s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210810232643-345780 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0810 23:27:36.939708  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/newest-cni/serial/Stop (17.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210810232643-345780 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210810232643-345780 --alsologtostderr -v=3: (17.467460498s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (17.47s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780: exit status 7 (101.700822ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210810232643-345780 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (26.01s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210810232643-345780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
E0810 23:27:59.310576  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810222001-345780/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210810232643-345780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (25.667408238s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210810232643-345780 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (2.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210810232643-345780 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780: exit status 2 (325.756745ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780: exit status 2 (326.62342ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210810232643-345780 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210810232643-345780 -n newest-cni-20210810232643-345780
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.58s)

TestNetworkPlugins/group/custom-weave/Start (75.07s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210810225249-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210810225249-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio: (1m15.073766806s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (75.07s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210810225248-345780 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (9.49s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210810225248-345780 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-q5h2p" [1da4291f-17cb-47f6-b23e-594d6d60972d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-q5h2p" [1da4291f-17cb-47f6-b23e-594d6d60972d] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.007385122s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.49s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210810225248-345780 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210810225248-345780 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210810225248-345780 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/cilium/Start (76.41s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210810225249-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210810225249-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio: (1m16.405187869s)
--- PASS: TestNetworkPlugins/group/cilium/Start (76.41s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210810225249-345780 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-weave/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210810225249-345780 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-v5c26" [59b43572-9ba5-4e5e-bbf0-dd29b4f7cc65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-v5c26" [59b43572-9ba5-4e5e-bbf0-dd29b4f7cc65] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.006614199s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (10.44s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:340: "cilium-r8cn7" [b20285b3-ab2a-4887-9c53-6deeda053685] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014023684s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210810225249-345780 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)

TestNetworkPlugins/group/cilium/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210810225249-345780 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-t7gh4" [514549b7-fa68-47a9-9d39-ef5907d13160] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-t7gh4" [514549b7-fa68-47a9-9d39-ef5907d13160] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.00732393s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.34s)

TestNetworkPlugins/group/cilium/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210810225249-345780 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.18s)

TestNetworkPlugins/group/cilium/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210810225249-345780 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.16s)

TestNetworkPlugins/group/cilium/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210810225249-345780 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (49.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210810225248-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210810225248-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio: (49.809528943s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.81s)

TestNetworkPlugins/group/kindnet/Start (97.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210810225249-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio
E0810 23:31:13.893384  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210810225417-345780/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210810225249-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio: (1m37.178617784s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (97.18s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210810225248-345780 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210810225248-345780 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-wh8dw" [b02a1306-9515-4a19-9044-dab6d6385fb9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-wh8dw" [b02a1306-9515-4a19-9044-dab6d6385fb9] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006161154s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210810225248-345780 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210810225248-345780 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210810225248-345780 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (50.2s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210810225248-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio
E0810 23:32:13.665494  345780 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-342823-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222942-345780/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210810225248-345780 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio: (50.196302446s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210810225248-345780 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210810225248-345780 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-7nml7" [31909a4e-db78-4e26-8a7e-f0c80b3a3dc0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:340: "netcat-66fbc655d5-7nml7" [31909a4e-db78-4e26-8a7e-f0c80b3a3dc0] Running
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.006790333s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:340: "kindnet-8qgt9" [00e5af1a-577e-41c4-90dc-ee1df1523e61] Running
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.011697089s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210810225249-345780 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210810225248-345780 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210810225248-345780 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210810225248-345780 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (24/237)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.52s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210810230738-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210810230738-345780
--- SKIP: TestStartStop/group/disable-driver-mounts (0.52s)

TestNetworkPlugins/group/kubenet (0.46s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as crio container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210810225248-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210810225248-345780
--- SKIP: TestNetworkPlugins/group/kubenet (0.46s)

TestNetworkPlugins/group/flannel (0.46s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210810225248-345780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20210810225248-345780
--- SKIP: TestNetworkPlugins/group/flannel (0.46s)
